We have a client with a 30-50GB PeopleSoft database, and amazingly, when
it does batch processing it produces archivelogs that, taken together, are
far bigger than the database itself: around 150GB worth of archivelogs.

The issue is on the Disaster Recovery side. If we leave Data Guard log
transport enabled during the batch run, the network bandwidth gets consumed
quickly, causing a long archive-gap queue. It is the same if we turn off
log transport, do the batch run, and then re-enable it: the gap resolution
process eats up the entire network bandwidth.
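
For reference, a minimal sketch of the defer/re-enable approach on 10g,
assuming the standby destination is LOG_ARCHIVE_DEST_2:

    -- on the primary, before the batch run: stop shipping redo
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;

    -- ... batch run ...

    -- after the batch run: resume shipping; gap resolution then
    -- pulls the entire backlog across the link
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;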

I have read the doc *"Batch Processing in Disaster Recovery
Configurations - Best Practices for Oracle Data Guard"*
https://docs.google.com/viewer?url=http://www.hitachi.co.jp/Prod/comp/soft1/oracle/pdf/OBtecinfo-08-008.pdf
but that covers 11g. On 10g what I can do is SSH compression
(http://goo.gl/jc6n), but even with compression enabled the redo volume
may still exceed the network capacity.
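
The SSH trick, roughly (hostnames, ports, and the service name here are
just examples): run a compressed tunnel from the primary to the standby's
listener and point the archive destination at the local end of it:

    # compressed tunnel from the primary host to the standby listener
    ssh -C -N -L 11521:standby-host:1521 oracle@standby-host &

    # tnsnames.ora entry on the primary pointing at the tunnel
    STBY_TUNNEL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 11521))
        (CONNECT_DATA = (SERVICE_NAME = stby)))

    -- redo transport then rides the compressed link
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=STBY_TUNNEL';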

RMAN incremental backups (differential and cumulative) don't help here,
because they would still generate and re-apply the same amount of redo.
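
(For completeness, the 10g SCN-based roll-forward, the "incremental
update" I mention below, would look roughly like this, with the SCN and
staging path made up; it only wins if the batch rewrites the same blocks
many times over:)

    -- on the standby: note the SCN to roll forward from
    SELECT CURRENT_SCN FROM V$DATABASE;

    -- on the primary: back up only the blocks changed since that SCN
    RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
          FORMAT '/stage/fwd_%U';

    -- ship /stage to the standby host, then on the standby:
    RMAN> CATALOG START WITH '/stage/';
    RMAN> RECOVER DATABASE NOREDO;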

Another thing I'm exploring is to turn off log transport before the batch
run, and afterwards duplicate the standby from scratch (a fresh duplicate,
not an incremental update).
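
A sketch of that fresh re-instantiation on 10g (paths and connect strings
are examples); shipping a 30-50GB backup should beat shipping ~150GB of
redo:

    -- on the primary: fresh backup suitable for a standby
    RMAN> BACKUP DATABASE INCLUDE CURRENT CONTROLFILE FOR STANDBY
          FORMAT '/stage/db_%U';

    -- copy /stage to the same path on the standby host (scp -C, rsync -z, ...)

    -- with the standby instance started NOMOUNT:
    RMAN> CONNECT TARGET sys@prim AUXILIARY sys@stby
    RMAN> DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER;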

Do you have other ideas around this? If you have encountered this, how did
you address it? Is there a related PeopleSoft bug? Any input will be very
much appreciated.

--
Karl Arao
karlarao.wordpress.com
karlarao.tiddlyspot.com



  • Goulet, Richard at Oct 20, 2010 at 12:48 pm
    Arao,


    Welcome to the world of PeopleSoft. The reason for the "excessive"
    redo generation has to do with what PeopleSoft calls a "temporary
    table". In fact these are real tables that get loaded, updated in some
    cases, unloaded, and reused many times over. In some cases the data in
    the temp table is needed by multiple sessions that run their data
    manipulation & report generation process (darn, can't for the life of
    me remember that product's name). Where that is not the case, you can
    replace it with a true Oracle temporary table to reduce your redo
    generation. Just don't expect any help from PeopleSoft in determining
    which is which. I did it by trial & error in a dev environment, then
    got the local developers and management to buy in, and cut out half of
    the redo. BUT, WARNING: you are now moving into an area where
    PeopleSoft support can get dicey and upgrades become a bit more of a
    pain, so document everything very well and create scripts not only to
    create what you need, but to undo it as well.
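
    For a candidate that does qualify, the swap looks roughly like this
    (the table name is made up; you'll also need to recreate any indexes,
    and script the undo as noted above):

        -- keep the original PeopleSoft working table around
        ALTER TABLE PS_EXAMPLE_TAO RENAME TO PS_EXAMPLE_TAO_BKP;

        -- true global temporary table: its rows generate no table redo
        -- (only the redo protecting their undo), cutting redo sharply
        CREATE GLOBAL TEMPORARY TABLE PS_EXAMPLE_TAO
          ON COMMIT PRESERVE ROWS
          AS SELECT * FROM PS_EXAMPLE_TAO_BKP WHERE 1 = 0;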


    Now for the required disclaimer: my experience with PeopleSoft is
    about 4 years old, so support, which was bad at best, may have gotten
    better with Oracle's acquisition of the product (a bad move IMHO). So
    move cautiously, with much coordination, and be ready to back changes
    out.


    Dick Goulet
    Senior Oracle DBA



    From: oracle-l-bounce@freelists.org
    On Behalf Of Karl Arao
    Sent: Wednesday, October 20, 2010 1:50 AM
    To: oracle-l@freelists.org
    Subject: Peoplesoft Batch run and excessive redo generation

