Currently we are running 10g EE on SUSE10 on an IBM mainframe. Early
February we will be getting a new mainframe. The storage for all
databases will be unplugged from the old mainframe and plugged into
the new one. So far no issues, I've done this before.

The next step is to upgrade all databases to 11g. My lead SA informed me
today that he wants to use the same oracle binaries for all database
VMs. I can see the benefits from his perspective but I don't think
that should be the overriding concern here. He doesn't manage the
oracle binaries, I do. Has anyone done the shared binaries between
servers before? What are your thoughts?


  • David Mann at Jan 16, 2012 at 10:37 pm

    Has anyone done the shared binaries between
    servers before? What are your thoughts?
    We do this where I currently work. All data files and binaries are on
    NAS/SAN devices and are separate from the db machines.

    I thought these folks were crazy but I now 'get' why it is this way.
    We have an OHOMEPROD mount that attaches to /u01/app/oracle and we
    connect that to all our servers so that set of files is available
    everywhere. We also have an OHOMEDEV mount that services dev/test.

    Pros:
    o Recovering from machine (or VM in your case) failure - mount the
    Home and Datafiles mounts on another server, do a little config and
    you are back up and running.
    o Bringing newly installed machines into the 'fold' is a lot easier
    than firing up OUI and waiting for the install to run. Even if you
    have it scripted with a response file this can be painful to wait for.
    o You are guaranteed to be running the same binaries everywhere. If
    you need to move a DB From one server to another this is a big plus.
    o We push a copy of our OHOME mounts to our remote DR site so we can
    easily bring up any DB there as well.
    o We have a /u01/app/oracle/script directory with our custom commonly
    used shell scripts, it is nice to know they are available everywhere
    and are all the same version no matter where I access them from.

    Cons:
    o Learning curve and just getting comfortable with it. I tiptoed
    around the new-to-me config for a couple of months before I was
    comfortable.
    o Once a set of binaries is installed and patched we don't touch it. To
    patch we create a new Oracle home. When multiple DBs use the same
    patch level this is great, when they start to diverge it can get
    unwieldy. Decide on a naming convention with some good descriptive
    home directory names from the get-go.
    o Get familiar with root.sh as you will probably need to leverage it
    to use the binaries for the first time on a new machine/VM.
    o Make a mistake in one place and it can bring everyone down. We hit
    an OUI issue that started deleting all the files from our shared home.
    One by one groups of systems started dying. We finally found the
    process, stopped it, and restored our backup... and restarted a few
    hundred databases... that was a hellish day for sure.
    o Getting used to some heavy use of symbolic links. Our
    /u01/app/oracle/admin/DBxx directories live on the OHOME mount... but
    it only has symbolic links to directories on the shared storage.
    o I would suggest different OHOME mounts for Dev/Test and Prod. There is
    no reason for a testing issue (like the one mentioned above) to affect
    Prod when it doesn't need to.

    This is what I can remember off the top of my head. If anyone has any
    resources to share on the topic, I would love to check them out.
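
    The admin-directory symlink layout described above can be sketched as
    follows (all paths are illustrative, rooted in a temp directory for
    demonstration; a real shared-home setup would point the links at the
    mounted storage):

```shell
# Sketch of the admin-directory symlink layout on a shared Oracle home.
# The OHOME mount carries only symlinks; each target lives on shared storage.
base=$(mktemp -d)

# Simulated shared-storage location for one database's admin files
mkdir -p "$base/shared_storage/DB01/adump"

# The OHOME mount's admin tree holds a symlink per database
mkdir -p "$base/u01/app/oracle/admin"
ln -s "$base/shared_storage/DB01" "$base/u01/app/oracle/admin/DB01"

# Resolving the link lands on the shared-storage directory
readlink "$base/u01/app/oracle/admin/DB01"
```

    The point of the indirection is that every server mounting OHOME sees
    the same admin paths, while the bulky per-database files stay on the
    shared storage behind the links.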

    --
    Dave Mann
    www.brainio.us
    www.ba6.us - Database Stuff - http://www.ba6.us/rss.xml
    --
    http://www.freelists.org/webpage/oracle-l
  • De DBA at Jan 17, 2012 at 1:10 am
    One small problem that may arise with an arrangement like this is the files in $ORACLE_HOME/dbs. You must make doubly sure that no instance sharing the remote home has a duplicate name, lest it overwrite some other instance's memory-map files or inadvertently reuse another instance's password file and spfile.

    There are also quite a few locations in the home where logfiles are written, e.g. by the configuration tools. How do you prevent these from becoming a jumble of different sources? (For a list of the obvious locations: `sudo find $ORACLE_HOME -type d -name log -print`.)
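
    One way to spot a clash before it bites: list the per-instance
    artifacts already present in the shared home's dbs directory (a sketch;
    the directory path is passed in, and the file-name patterns are the
    usual spfile/orapw/lk naming conventions):

```shell
# scan_dbs: list per-instance artifacts (spfiles, password files, lock
# files) in a dbs directory so that clashing instance names stand out
# before a new instance is started against the shared home.
scan_dbs() {
  dbs_dir="$1"
  for f in "$dbs_dir"/spfile*.ora "$dbs_dir"/orapw* "$dbs_dir"/lk*; do
    [ -e "$f" ] || continue    # skip unmatched glob patterns
    printf '%s\n' "${f##*/}"
  done
}

# Usage (path hypothetical):
# scan_dbs /u01/app/oracle/product/11.2.0/db_1/dbs
```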

    Cheers,
    Tony
    On 17/01/12 08:36, David Mann wrote:
    Has anyone done the shared binaries between
    servers before? What are your thoughts?
    We do this where I currently work. All data files and binaries are on
    NAS/SAN devices and are separate from the db machines.

    I thought these folks were crazy but I now 'get' why it is this way.
    We have an OHOMEPROD mount that attaches to /u01/app/oracle and we
    connect that to all our servers so that set of files is available
    everywhere. We also have an OHOMEDEV mount that services dev/test.

    <snip against overquoting>
  • Sandra Becker at Jan 17, 2012 at 3:32 pm
    Thanks for the input. I just talked to the boss and he said that he
    would not support this kind of change right now given everything else
    we need to do.

    Sandy
  • David Mann at Jan 18, 2012 at 3:31 am

    On Tue, Jan 17, 2012 at 10:30 AM, Sandra Becker wrote:
    Thanks for the input.  I just talked to the boss and he said that he
    would not support this kind of change right now given everything else
    we need to do.

    Sandy
    Thanks for the update. Definitely not something to rush into if
    pressed for time.

    I found a couple of resources that cover shared homes on RAC if you
    end up pursuing it in the future.

    http://prodlife.wordpress.com/2008/06/19/oracle_home-to-share-or-not-to-share/

    http://www.oracle.com/technetwork/database/clustering/overview/oh-rac-133684.pdf

    -Dave

    --
    Dave Mann
    www.brainio.us
    www.ba6.us - Database Stuff - http://www.ba6.us/rss.xml
  • Maureen English at Feb 7, 2012 at 9:25 pm
    I'm in the process of creating a few databases on a new machine. I ran dbca and
    generated the scripts to create the new databases, then ran the scripts.

    What I found was that when I ran the scripts for the first database, it configured
    EM database control to use port 1158. When I ran the scripts for the second
    database, though, it configured EM database control to use port 5500.

    Isn't there a way to have just one instance of EM database control running on a
    machine, using port 1158, but then be able to monitor any database from that one
    instance?

    We're not using Grid Control.

    - Maureen
  • Guenadi Jilevski at Feb 7, 2012 at 9:30 pm
    Hi,

    No. OEM Database Control is per database.

    You will need OEM Grid Control in order to have one agent per node and
    monitor the whole node with all its databases, etc.
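
    For what it's worth, the port each Database Control configuration
    claimed is recorded in $ORACLE_HOME/install/portlist.ini, so a quick
    grep shows them all at once (a sketch; the entry format assumed here is
    the typical "Enterprise Manager Console HTTP Port (SID) = 1158" line):

```shell
# em_ports: list the HTTP ports recorded in a portlist.ini file so you
# can see at a glance which port each Database Control install claimed.
em_ports() {
  grep -i 'HTTP Port' "$1"
}

# Usage (path illustrative):
# em_ports /u01/app/oracle/product/11.1.0/db_1/install/portlist.ini
```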

    Regards,

    Guenadi Jilevski

    On Tue, Feb 7, 2012 at 11:24 PM, Maureen English
    wrote:
    <snip against overquoting>
  • Kamran Agayev (ICT/SNO) at Feb 7, 2012 at 9:36 pm
    You need to install Grid Control. Check my video tutorial on how to install it and configure agents

    http://kamranagayev.com/2011/01/31/video-tutorial-installing-oracle-10gr2-grid-control-and-deploying-agent/



    ----- Original Message -----
    From: Maureen English
    Sent: Wednesday, February 08, 2012 01:24 AM
    To: [email protected] <[email protected]>
    Subject: EM database control question

    <snip against overquoting>
  • Radoulov, Dimitre at Feb 7, 2012 at 9:48 pm

    On 07/02/2012 22:24, Maureen English wrote:
    Isn't there a way to have just one instance of EM database control running on a
    machine, using port 1158, but then be able to monitor any database from that one
    instance?
    The official documentation states the following:

    Oracle® Enterprise Manager Concepts
    11g Release 11.1.0.1

    6 Database Management

    Database Control Versus Grid Control


    Enterprise Manager provides two separate consoles that you can use to
    monitor your database: Database Control and Grid Control.

    Database Control is the Enterprise Manager Web-based application
    for managing Oracle Database 11g Release 1 (11.1) and later. Database
    Control is installed and available with every Oracle Database 11g
    installation. From Database Control, you can monitor and administer a
    single Oracle Database instance or a clustered database.

    Grid Control is the Enterprise Manager console you use to centrally
    manage your entire Oracle environment. Within Grid Control, you access
    the multiple database targets using the Targets tab, then Databases.


    From:
    docs.oracle.com/cd/E11857_01/em.111/e11982/database_management.htm#DAFJDAEG


    Regards
    Dimitre
  • Maureen English at Feb 7, 2012 at 10:01 pm
    Thanks for the quick responses!

    I guess it was just wishful thinking that if the databases were all on the
    same machine I could use just one port. I'm sure that was possible with
    Oracle 10.1, but now we're on 11g and things have changed a lot.

    ...adding Grid Control to my list of things to do....

    - Maureen

    Radoulov, Dimitre wrote:
    <snip against overquoting>
  • Radoulov, Dimitre at Feb 7, 2012 at 10:13 pm

    On 07/02/2012 22:58, Maureen English wrote:
    I guess it was just wishful thinking that if the databases were all on the
    same machine I could use just one port. I'm sure that was possible with
    Oracle 10.1 [...]
    I suppose there was some undocumented (and unsupported) way of
    implementing the configuration you're talking about ...


    Database Control Console Versus Grid Control Console

    Enterprise Manager provides two configurations with which to monitor
    your database: Database Control Console and Grid Control Console.
    Database Control Console is the Enterprise Manager Web-based application
    for managing Oracle Database 10g Release 1 (10.1). The Database Control
    Console is installed and available with every Oracle Database 10g
    installation.

    From the Database Control Console, you can monitor and administer a
    single Oracle Database instance or a clustered database.

    The Grid Control Console is the Enterprise Manager console used for
    centrally managing your entire Oracle environment. Within Grid Control
    Console, you access the database targets using the Targets tab and
    clicking Databases.



    From: docs.oracle.com/html/B12016_02/chap_db_admin.htm#sthref242
  • Anthony Ballo at Jan 21, 2012 at 1:04 am
    Anyone running SOA 11g out there? Well, we just started working on a data
    purge strategy (PS4) using the Looped Purge script supplied by Oracle.
    Like others have written on the web, it doesn't delete everything so I put
    together a "manual purge" script that deletes the data and then does a
    SHRINK and DEALLOCATE UNUSED on various tables.

    My question is this: We have purged about 90% of the data in our
    tablespace and would now like to recover (shrink) the datafile(s) using:

    ALTER DATABASE DATAFILE '+DG1/tstsoa/datafile/dev_soainfra.310.767790105'
    RESIZE 5G;
    commit;

    But then receive this:

    ERROR:
    ORA-03297: file contains used data beyond requested RESIZE value


    Is there now a way to "coalesce" the data stored in the datafile so it is
    not fragmented? Or is my only option to create a new tablespace, move
    all the objects (tables and indexes) to the new one, then rebuild (drop
    datafiles) and resize the original tablespace and move the objects back?


    Thanks,

    Anthony
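
    For reference, ORA-03297 means some segment still owns extents above
    the requested size; a query along these lines (a sketch against the
    standard DBA_EXTENTS view; the file_id and block threshold must be
    looked up for the actual datafile and block size) identifies the
    segments blocking the resize:

```sql
-- Segments whose highest extent sits above the target resize point.
-- :file_id comes from DBA_DATA_FILES for the datafile in question;
-- :target_blocks = desired size / tablespace block size.
SELECT owner, segment_name, segment_type,
       MAX(block_id + blocks - 1) AS highest_block
FROM   dba_extents
WHERE  file_id = :file_id
GROUP  BY owner, segment_name, segment_type
HAVING MAX(block_id + blocks - 1) > :target_blocks
ORDER  BY highest_block DESC;
```

    Moving or shrinking just those segments is usually enough to let the
    RESIZE succeed.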
  • Anthony sanchez at Jan 21, 2012 at 1:53 am
    Hi anthony,
    I believe you can enable row movement on the tables then use alter table shrink space /shrink space compact/shrink space cascade to "free up" the unused space.
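
    In SQL that sequence looks roughly like this (schema and table name are
    illustrative; SHRINK SPACE requires the segment to live in an ASSM
    tablespace):

```sql
-- Allow rowids to change so rows can be relocated during the shrink
ALTER TABLE soa_owner.audit_trail ENABLE ROW MOVEMENT;

-- COMPACT relocates the rows but leaves the high-water mark in place
ALTER TABLE soa_owner.audit_trail SHRINK SPACE COMPACT;

-- A second pass without COMPACT lowers the high-water mark;
-- CASCADE also shrinks dependent segments such as indexes
ALTER TABLE soa_owner.audit_trail SHRINK SPACE CASCADE;
```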

    Anthony Sanchez

    On Jan 20, 2012, at 18:03, Anthony Ballo wrote:

    <snip against overquoting>
  • Sreejith S Nair at Jan 23, 2012 at 1:25 am
    Anthony,
    I recommend going through this post from Jonathan Lewis.

    http://jonathanlewis.wordpress.com/2010/02/06/shrink-tablespace/

    Sreejith,

    Sent from my iPhone
    On 21-Jan-2012, at 6:33 AM, Anthony Ballo wrote:

    <snip against overquoting>
  • Wayne Smith at Jan 18, 2012 at 2:14 pm
    I've not done this, but it will not work if your UIDs and GIDs for oracle,
    oinstall, etc. vary at all from system to system.
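    A quick sanity check before attempting it: print the numeric IDs on
    each host and compare the outputs (a sketch; the account names are the
    ones mentioned above):

```shell
# show_ids: print the numeric UID and primary GID for an account in a
# fixed format, so outputs from different hosts can be diffed directly.
show_ids() {
  printf 'uid=%s gid=%s\n' "$(id -u "$1")" "$(id -g "$1")"
}

# Usage (run on every host that will mount the shared home):
# show_ids oracle
# show_ids grid
```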
    Cheers, Wayne

    - Dunbar's Law: Software efficiency halves every 18 months thus
    compensating for Moore's Law. -- Norm Dunbar
    On Mon, Jan 16, 2012 at 3:02 PM, Sandra Becker wrote:

    <snip against overquoting>

Discussion Overview
group: oracle-l @ oracle
posted: Jan 16, '12 at 8:03p
active: Feb 7, '12 at 10:13p
posts: 15
users: 11