I was going to see what OCM has to offer and install it on one of our development servers. The install appears to be very easy. However, after entering the information prompted for by $ORACLE_HOME/ccr/bin/setupCCR, I get the following:

Registration Server encountered a SQL Exception
SQL Exception: ORA-01552: cannot use system rollback segment for non-system tablespace 'MGMT_TABLESPACE'
ORA-06512: at "SYSMAN.CCR_AUTH", line 2193
ORA-06512: at line 1

Since there is no tablespace with that name on the server I am attempting to configure OCM on, I can only assume this is being generated from Oracle support's repository? If so: way to be Oracle!

Anyone else have this problem? ...or has no one else installed OCM?


  • Wayne Smith at Aug 10, 2011 at 8:24 pm
    OCM is most useful to me because it enables "plans" and "health checks" on
    MOS. I use it on all production and most test/dev databases that will exist
    long enough to have their home updated (such as with a patchset or PSU or
    version upgrade).

    If all you've done is run setupCCR, then OCM isn't using your database
    ("collectconfig" is the thing that puts OCM stuff in your database). So I'm
    guessing the reference is to a database back at ccr.oracle.com central.
    Maybe time for an SR? (I haven't seen this error).

    The rest of this post is just my cookbook for OCM install ... sorry I can't
    be more helpful with this particular error :-(

    Make sure you are using (at least) version 10.3.5 of OCM. If your database
    and home have never had OCM installed, then I:

    1. Remove the $ORACLE_HOME/ccr directory, if it exists (other steps must
    be done first if OCM has already been installed in one of your
    databases, or if this is a clone of a database that had OCM installed).
    2. Download the latest version of the OCM zip file to the $ORACLE_HOME
    directory and explode it.
    3. Run $ORACLE_HOME/ccr/bin/setupCCR -s [CSI MOSemail US]
    - CSI is your CSI number; MOSemail is the email address you use on
    My Oracle Support; "US" is my country code.
    - If your server does not have access to ccr.oracle.com, enter NONE
    or read about DISCONNECTED mode.
    - If you have a proxy server that can give OCM access to
    ccr.oracle.com, enter its name and port when prompted.
    4. For each database, use ". oraenv" to set up environment variables,
    then set up the database for OCM with

    $ORACLE_HOME/ccr/admin/scripts/installCCRSQL.sh collectconfig

    5. Do an initial collection of data for MOS with

    $ORACLE_HOME/ccr/bin/emCCR collect

    6. Set a daily collection time, if "now" is not good enough, with

    $ORACLE_HOME/ccr/bin/emCCR set collection_interval='FREQ=DAILY;
    BYHOUR=04; BYMINUTE=15'

    7. Occasionally view OCM status to ensure OCM hasn't updated itself into
    a bad state:

    $ORACLE_HOME/ccr/bin/emCCR status
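
    A minimal shell sketch stringing steps 4-6 together for every SID in
    /etc/oratab (a sketch only, not part of the original cookbook; the
    oratab parsing and the schedule time are assumptions):

      #!/bin/sh
      # Sketch: run the per-database OCM steps for every SID in /etc/oratab.
      # Assumes the OCM kit is unpacked in this home and setupCCR has run.
      for SID in $(grep -v '^#' /etc/oratab | cut -d: -f1); do
        ORACLE_SID=$SID; ORAENV_ASK=NO
        export ORACLE_SID ORAENV_ASK
        . oraenv                                 # sets ORACLE_HOME for this SID
        $ORACLE_HOME/ccr/admin/scripts/installCCRSQL.sh collectconfig
        $ORACLE_HOME/ccr/bin/emCCR collect       # initial collection for MOS
      done
      # Once per home: schedule the daily collection.
      $ORACLE_HOME/ccr/bin/emCCR set collection_interval='FREQ=DAILY; BYHOUR=04; BYMINUTE=15'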

    (and let us know what the problem was, please!)

    Cheers, Wayne

    There is no job so simple that it cannot be done wrong. (Perrussel's Law)
    On Fri, Aug 5, 2011 at 2:25 PM, Stephens, Chris wrote:

  • David Roberts at Aug 10, 2011 at 9:34 pm
    The only feedback I've heard regarding OCM is that the patches recommended
    were completely inappropriate.

    My strongest recommendation is that, if you do install it, you verify all
    the recommendations with MOS before applying them.

    Dave
    On Wed, Aug 10, 2011 at 9:24 PM, Wayne Smith wrote:

  • Grabowy, Chris at Aug 10, 2011 at 9:44 pm
    We have been struggling to get our Datapump exports to complete every night. Sometimes the problem is execution time; in other cases, dump file size.

    We added the compress option and that helped with generating smaller dump files with no obvious impact to the execution time, but that hasn't really helped with the overall execution time.

    We have played with the parallel option with some success.

    Anyway, one of the DBAs updated the script to generate a datapump parfile that contains a list of the tables that changed in the last two days. Even though we execute datapump every night, he felt it would be safer to generate a list of changed tables covering the last two days. He generates the list of changed tables by querying dba_tab_modifications. With this change, the datapump exports are obviously faster and the dump files much smaller.

    We still do a weekly full datapump export on Sundays.

    Anyway, we're kind of scratching our heads and trying to figure out if this could come back and bite us somehow. Paranoia is a required DBA trait...

    We do understand that when importing we might have to go back to the Sunday save to import a table since it might have not been saved in any of the incrementals. We are saving/organizing the log files to easily grep for the desired table.

    I know that incremental was not a stable/valid option in the old export tool, but you would think that Oracle would have figured out how to do incremental datapump exports by now...using some sort of defined criteria. At least, looking thru the doc I did not find such an option. Am I just being naïve here and missing the bigger picture on the viability of a datapump/export incremental feature?

    Thoughts? Suggestions? Insults?
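
    A rough sketch of the kind of parfile-generating script described above
    (a sketch only; the APP owner, the DPUMP_DIR directory object, the
    sysdba connection and LISTAGG, which needs 11gR2, are all assumptions):

      #!/bin/sh
      # Sketch: build an expdp parfile of tables changed in the last two
      # days, per DBA_TAB_MODIFICATIONS, then export just those tables.
      PARFILE=/tmp/expdp_changed.par

      sqlplus -s "/ as sysdba" <<EOF
      set pagesize 0 feedback off heading off linesize 32767
      spool $PARFILE
      select 'TABLES=' || listagg(table_owner || '.' || table_name, ',')
               within group (order by table_owner, table_name)
      from   dba_tab_modifications
      where  table_owner = 'APP'
      and    timestamp  >= sysdate - 2;
      spool off
      EOF

      {
        echo "DIRECTORY=DPUMP_DIR"
        echo "DUMPFILE=changed_%U.dmp"
        echo "LOGFILE=changed.log"
        echo "COMPRESSION=ALL"
      } >> $PARFILE

      expdp system parfile=$PARFILE   # prompts for the system password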
  • Japplewhite_at_austinisd.org at Aug 10, 2011 at 9:56 pm
    Chris,

    I don't know of a manageable way to accomplish what you want with your
    main DB, though I'm certainly no DataPump guru. You might consider
    setting up a Physical Standby for the DB in question, scripting the
    opening of it in Read-Only mode for overnight backups, doing a guaranteed
    consistent export without the time constraints, then putting it back into
    Managed Recovery mode to catch back up with the Primary during the day.
    That would take the main DB out of the picture for you. A Logical Standby
    would work, too, but having dealt with both, they're much more of a
    maintenance burden, IMHO.
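
    For illustration, that round trip on a plain physical standby looks
    roughly like this (a sketch; details differ under the Data Guard Broker,
    and since expdp must write a master table it cannot run against a
    read-only database, so the original exp is used here):

      #!/bin/sh
      # Sketch: open the standby read-only, take a consistent export,
      # then resume managed recovery.
      sqlplus -s "/ as sysdba" <<EOF
      alter database recover managed standby database cancel;
      alter database open read only;
      EOF

      exp system full=y consistent=y file=/backup/standby_full.dmp

      sqlplus -s "/ as sysdba" <<EOF
      shutdown immediate
      startup mount
      alter database recover managed standby database disconnect from session;
      EOF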

    Jack C. Applewhite - Database Administrator
    Austin I.S.D. - MIS Department
    512.414.9250 (wk) / 512.935.5929 (pager)

    From: "Grabowy, Chris"
    To: "oracle-l@freelists.org"
    Date: 08/10/2011 04:46 PM

    Subject: Datapump export, incremential only?
    Sent by: oracle-l-bounce_at_freelists.org

    We have been struggling with having Datapump exports complete every
    night. Sometimes with execution times. Other cases with dump file sizes.


    We added the compress option and that helped with generating smaller dump
    files with no obvious impact to the execution time, but that hasn’t really
    helped with the overall execution time.


    We have played with the parallel option with some success.


    Anyway one of the DBAs updated the script to generate a datapump parm file
    that contains a list of the tables that changed in the last two days.
    Even though we execute datapump every night he felt it would be safer to
    generate a list of changed tables for the last two days. He generates
    the list of changed tables by querying dba_tab_modifications. With this
    change, the datapump exports are obviously faster and the dump files much
    smaller.


    We still do a weekly full datapump export on Sundays.


    Anyway, were kind of scratching our heads and trying to figure out if this
    could come back and bite us somehow. Paranoia is a required DBA trait…


    We do understand that when importing we might have to go back to the
    Sunday save to import a table since it might have not been saved in any of
    the incrementals. We are saving/organizing the log files to easily grep
    for the desired table.


    I know that incremental was not a stable/valid option in the old export
    tool, but you would think that Oracle would have figured out how to do
    incremental datapump exports by now…using some sort of defined criteria.
    At least, looking thru the doc I did not find such an option. Am I just
    being naïve here and missing the bigger picture on the viability of a
    datapump/export incremental feature?


    Thoughts? Suggestions? Insults?
  • Gus Spier at Aug 10, 2011 at 11:06 pm
    Chris, is there some way to use the datapump WHERE clause to set your
    defined criteria for an incremental datapump export? Of course, if the
    tables don't have a TIMESTAMP compatible column, that might be problematic.
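
    In expdp terms that "WHERE clause" is the QUERY parameter; a parfile
    sketch (the table, column and directory names are assumptions):

      #!/bin/sh
      # Sketch: a data filter via the QUERY parameter.
      cat > where.par <<EOF
      TABLES=APP.ORDERS
      QUERY=APP.ORDERS:"WHERE last_updated >= SYSDATE - 2"
      DIRECTORY=DPUMP_DIR
      DUMPFILE=orders_recent.dmp
      LOGFILE=orders_recent.log
      EOF
      expdp system parfile=where.par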

    I suspect that big Oracle advocates RMAN solutions over datapump incremental
    backups. That might account for the absence of your datapump solution.

    Regards,
    Gus
    On Wed, Aug 10, 2011 at 5:44 PM, Grabowy, Chris wrote:

  • Guillermo Alan Bort at Aug 10, 2011 at 11:40 pm
    I have only one question: do you expect these datapump dumps to be your
    database backup? If so, please think again.

    If you are using DP Exports to have "snapshots" of the database, might I
    suggest taking a look at flashback?

    Also, if you are actually worried that this will come back and bite you, why
    not attempt to recover the database from this "backup" (and I use the term
    loosely)? That way you will have documentation of what is required, and you
    won't have to struggle with parameters and stuff during an actual recovery
    situation. We call this a DRE (Disaster Recovery Exercise).

    hth
    Alan.-
    On Wed, Aug 10, 2011 at 8:06 PM, Gus Spier wrote:

  • Grabowy, Chris at Aug 16, 2011 at 12:37 am
    Gus,

    I believe the DBA was trying to take advantage of that WHERE clause but was not able to make it work that way.

    As I mentioned in the other email, we do the standard RMAN backups. Since we are using ASM, we will always do RMAN backups.

    Having a daily DP export has some nice benefits. We are simply trying to make it more efficient.

    Thanks,
    Chris

  • Rjamya at Aug 16, 2011 at 5:39 pm
    Interesting ... could/would you explain what problems were encountered? I am
    curious ... I have had success with it, but in my case the list of tables
    against which the expdp was run was pretty short (< 10).

    Raj
    On Mon, Aug 15, 2011 at 8:37 PM, Grabowy, Chris wrote:

    Gus,

    I believe the DBA was trying to take advantage of that WHERE clause but was
    not able to make it work that way.
  • Grabowy, Chris at Aug 16, 2011 at 6:50 pm
    I believe the DBA was trying to take advantage of that WHERE clause but was not able to make it work that way.
    We were trying to use the WHERE clause to query DBA_TAB_MODIFICATIONS to
    retrieve a list of tables that have been updated recently, simulating an
    incremental datapump export.

    You were able to do that?

  • Rjamya at Aug 16, 2011 at 10:20 pm
    No, just a pure where clause applicable to a bunch of tables ... not via
    dba_tab_modifications ... however, I see this could get difficult via the
    expdp interface. It might be possible to do this via dbms_datapump, though,
    since it provides much more flexibility.
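
    A sketch of what that could look like through DBMS_DATAPUMP (the APP
    owner and the DPUMP_DIR directory object are assumptions):

      sqlplus -s "/ as sysdba" <<'EOF'
      set serveroutput on
      declare
        h     number;
        state varchar2(30);
      begin
        h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
        dbms_datapump.add_file(h, 'changed.dmp', 'DPUMP_DIR');
        dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''APP'')');
        -- restrict the table list with a subquery, as discussed above
        dbms_datapump.metadata_filter(h, 'NAME_EXPR',
          'IN (select table_name from sys.dba_tab_modifications' ||
          ' where table_owner = ''APP'' and timestamp >= sysdate - 2)');
        dbms_datapump.start_job(h);
        dbms_datapump.wait_for_job(h, state);
        dbms_output.put_line('job state: ' || state);
      end;
      /
      EOF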

    Raj
    On Tue, Aug 16, 2011 at 2:50 PM, Grabowy, Chris wrote:

  • Herring Dave - dherri at Aug 11, 2011 at 1:45 pm
    Chris,

    If you're going to rely on the view DBA_TAB_MODIFICATIONS, make sure to execute DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO beforehand, so that this view is up-to-date.
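
    For example (a sketch; run the flush just before building the table list):

      sqlplus -s "/ as sysdba" <<'EOF'
      exec dbms_stats.flush_database_monitoring_info
      select table_owner, table_name, timestamp
      from   dba_tab_modifications
      where  timestamp >= sysdate - 2;
      EOF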

    DAVID HERRING

    DBA

    Acxiom Corporation

    EML   dave.herring_at_acxiom.com
    TEL    630.944.4762
    MBL   630.430.5988

    1501 Opus Pl, Downers Grove, IL 60515, USA
    WWW.ACXIOM.COM


  • Rjamya at Aug 11, 2011 at 3:45 pm
    I am fairly certain you are not really looking at this as a substitute for
    backup, so what is the real purpose of these incremental exports?
    Irrespective of the real purpose, you could fire off one job per table to
    make it even more _parallel_, in addition to the parallel option for large
    tables, and this can be easily scripted.

    Another option ... if you want pure data only (i.e. no metadata), you could
    create external tables (in datapump mode), which will allow you to create
    datafiles that contain specified data. This will allow you to fine-tune the
    data that you export on a per-table basis as well.

    How big is the schema that you are trying to export (expdp)? Are you so
    constrained by disk space that you can't export daily?
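
    A sketch of the external-table unload described above (the table, column
    and directory names are assumptions):

      sqlplus -s "/ as sysdba" <<'EOF'
      -- Unload selected rows to a datapump-format file; another
      -- ORACLE_DATAPUMP external table can read it back later.
      create table orders_unload
        organization external (
          type oracle_datapump
          default directory dpump_dir
          location ('orders_unload.dmp')
        )
      as
      select * from app.orders
      where  order_date >= sysdate - 2;
      EOF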

    Raj
  • Grabowy, Chris at Aug 16, 2011 at 12:40 am
    Thanks Dave.

    I will ask him to check into that.

    Thanks,
    Chris

  • Stephens, Chris at Aug 12, 2011 at 1:12 pm
    Thanks for the summarized list of steps.

    I finally got around to attempting this again, and it appears that Oracle has resolved the issue on their end. The setupCCR script completed without error. On to the database-specific piece!

    Thanks again.
    Chris

