Has anyone done shared binaries between servers before? What are
your thoughts?
We do this where I currently work. All data files and binaries are on
NAS/SAN devices and are separate from the db machines.
I thought these folks were crazy but I now 'get' why it is this way.
We have an OHOMEPROD mount that attaches at /u01/app/oracle; we
connect it to all our servers so the same set of files is available
everywhere. We also have an OHOMEDEV mount that serves dev/test.
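To give a concrete picture, here is roughly what that looks like in
/etc/fstab. The NAS hostname, export paths, and NFS options below are
made up for illustration - use whatever your storage vendor and
Oracle's NFS guidance actually recommend:

    # /etc/fstab on each prod DB server (host/paths/options are examples)
    nas01:/export/OHOMEPROD  /u01/app/oracle  nfs  rw,hard,rsize=32768,wsize=32768  0 0

    # dev/test servers mount the dev copy at the same path instead
    nas01:/export/OHOMEDEV   /u01/app/oracle  nfs  rw,hard,rsize=32768,wsize=32768  0 0

The point is that the same path exists on every box, so scripts and
oratab entries look identical no matter which server you are on.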
Pros:
o Recovering from machine (or VM in your case) failure - mount the
Home and Datafiles mounts on another server, do a little config, and
you are back up and running (recovery sketch after this list).
o Bringing newly installed machines into the 'fold' is a lot easier
than firing up OUI and waiting for the install to run. Even if you
have it scripted with a response file, it can be painful to wait for.
o You are guaranteed to be running the same binaries everywhere. If
you need to move a DB from one server to another, this is a big plus.
o We push a copy of our OHOME mounts to our remote DR site so we can
easily bring up any DB there as well (replication sketch after this
list).
o We have a /u01/app/oracle/script directory with our custom,
commonly used shell scripts; it is nice to know they are available
everywhere and are all the same version no matter where I access
them from.
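To flesh out that first recovery point: 'a little config' on the
replacement server is mostly mounting the shares and telling Oracle
where the instance lives. A rough sketch, with made-up mount sources,
home path, and SID:

    # as root on the replacement server: attach the shared home and datafiles
    mount nas01:/export/OHOMEPROD  /u01/app/oracle
    mount nas01:/export/ORADATA01  /u02/oradata

    # register the instance so oraenv and the start/stop scripts know it
    echo "PRODDB:/u01/app/oracle/product/12.1.0.2/db_1:N" >> /etc/oratab

    # as oracle: pick up the environment and bring the database up
    . oraenv          # answer PRODDB at the prompt
    sqlplus / as sysdba <<'EOF'
    startup
    EOF

Assuming the spfile and password file sit in the default
$ORACLE_HOME/dbs location, they come along with the shared home
automatically.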
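And on the DR copy: I won't claim this is exactly how we push it, but
conceptually it is just a copy of the export. Something like the
following (hypothetical hosts/paths), or your array's own
replication, gets the job done:

    # run from a host that can see both sites
    rsync -aH --delete /export/OHOMEPROD/  drnas01:/export/OHOMEPROD/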
Cons:
o Learning curve and just getting comfortable with it. I tiptoed
around the new-to-me config for a couple of months before I was
comfortable.
o Once a set of binaries is installed and patched we don't touch it.
To patch, we create a new Oracle home. When multiple DBs use the same
patch level this is great; when they start to diverge it can get
unwieldy. Decide on a naming convention with good, descriptive home
directory names from the get-go (naming example after this list).
o Get familiar with root.sh, as you will probably need to leverage it
the first time you use the binaries on a new machine/VM (first-time
setup sketch after this list).
o Make a mistake in one place and it can bring everyone down. We hit
an OUI issue that started deleting all the files from our shared home.
One by one, groups of systems started dying. We finally found the
process, stopped it, and restored our backup... and restarted a few
hundred databases... that was a hellish day for sure.
o Getting used to some heavy use of symbolic links. Our
/u01/app/oracle/admin/DBxx directories live on the OHOME mount... but
those directories only contain symbolic links to directories
elsewhere on the shared storage (symlink example after this list).
o I would suggest different OHOME mounts for Dev/Test and Prod. There
is no reason for a testing issue (like the one mentioned above) to
affect Prod when it doesn't need to.
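A quick example of the naming-convention point. These directory names
are invented, but the idea is to bake the version and patch level
into the home name so you can tell at a glance what a database is
running on:

    /u01/app/oracle/product/11.2.0.4_psu_jul/db_1
    /u01/app/oracle/product/12.1.0.2_base/db_1
    /u01/app/oracle/product/12.1.0.2_psu_jan/db_1

Each new patch level gets a brand-new home alongside the old ones,
and databases move over by pointing their environment/oratab at the
new directory (plus whatever post-install SQL steps the patch calls
for).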
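On the root.sh point, here is roughly what first-time setup on a new
machine/VM looks like once the shared home is mounted. The inventory
path and home name are only examples, and this assumes the oracle
user and oinstall/dba groups already exist on the box with the same
uid/gid used everywhere else:

    # as root on the new server, after mounting /u01/app/oracle
    # tell this machine where the central inventory lives
    cat > /etc/oraInst.loc <<'EOF'
    inventory_loc=/u01/app/oracle/oraInventory
    inst_group=oinstall
    EOF

    # root.sh lays down the local root-owned pieces: /etc/oratab plus
    # the oraenv/coraenv/dbhome wrappers in /usr/local/bin
    /u01/app/oracle/product/12.1.0.2_psu_jan/db_1/root.sh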
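And to make the symbolic-link point concrete: the admin tree for a
database sits on the OHOME mount, but the directories under it are
just links out to other shared volumes. The link targets here are
invented:

    # on the OHOME mount, the per-database admin dirs are mostly symlinks
    mkdir -p /u02/oradata/DBxx/adump /u02/oradata/DBxx/dpdump
    ln -s /u02/oradata/DBxx/adump   /u01/app/oracle/admin/DBxx/adump
    ln -s /u02/oradata/DBxx/dpdump  /u01/app/oracle/admin/DBxx/dpdump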
This is what I can remember off the top of my head. If anyone has any
resources to share on the topic, I would love to check them out.