it does batch processing that produces archivelogs which, taken together, are
way bigger than the database itself: around 150 GB worth of archivelogs.
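
For context, one way to verify that daily archivelog volume is to sum `v$archived_log`; a quick sketch, assuming SYSDBA access on the primary:

```bash
sqlplus -s / as sysdba <<'EOF'
-- Daily archived-log volume in GB; dest_id = 1 restricts the count to
-- the local destination so standby destination rows are not double-counted
SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 1) AS gb
FROM   v$archived_log
WHERE  dest_id = 1
GROUP  BY TRUNC(completion_time)
ORDER  BY day;
EOF
```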
The issue is on the Disaster Recovery side. If we leave Data Guard log
transport enabled during the batch run, the network bandwidth gets consumed
quickly and a long archive gap queue builds up. It is the same if we turn off
log transport, do the batch run, and then re-enable it: the gap resolution
process will eat up the entire network bandwidth.
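
For reference, toggling transport around the batch window is just a matter of deferring and re-enabling the archive destination; a minimal sketch, assuming the standby is destination 2 (`LOG_ARCHIVE_DEST_2`):

```bash
# Before the batch run: stop shipping redo to the standby
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
EOF

# ... batch run ...

# After the batch run: re-enable transport; gap resolution
# starts shipping the backlog immediately
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
EOF
```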
I have read the paper *"Batch Processing in Disaster Recovery
Configurations - Best Practices for Oracle Data Guard"*, but that covers 11g.
On 10g, what I can do is SSH compression (http://goo.gl/jc6n), but even with
compression enabled the redo volume may still exceed the network capacity.
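
To be concrete about the SSH compression option: as I understand it, the usual wiring is a compressed SSH tunnel from the primary to the standby listener, with the redo destination pointed at the local end of the tunnel. A rough sketch, with host names and ports as placeholders:

```bash
# Compressed (-C) tunnel: local port 1600 forwards to the standby listener
ssh -C -N -L 1600:standby-host:1521 oracle@standby-host &

# LOG_ARCHIVE_DEST_2 then uses a TNS alias that resolves to
# localhost:1600, so all redo traffic passes through the tunnel
```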
RMAN incremental backups (differential and cumulative) do not help here
either, because essentially the same volume of changes would be generated,
shipped, and re-applied on the standby.
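
For completeness, this is the roll-forward variant I evaluated (the standard recover-standby-from-incremental procedure); the SCN and staging path below are placeholders:

```bash
# On the primary: back up every block changed past the standby's current SCN
rman target / <<'EOF'
BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/stage/std_%U';
EOF

# Ship /stage/* to the standby, then (standby mounted, managed
# recovery stopped) apply the changed blocks without any redo
rman target / <<'EOF'
CATALOG START WITH '/stage/' NOPROMPT;
RECOVER DATABASE NOREDO;
EOF
```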
Another thing I'm exploring is to just turn off log transport before the
batch run, and then, instead of resolving the gap afterwards, rebuild the
standby with a fresh RMAN duplicate (not an incremental update).
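
If it helps the discussion, the rebuild I have in mind is along these lines; on 11g an active duplicate can do it over the network (connection strings are placeholders, and the auxiliary instance must already be started NOMOUNT):

```bash
rman target sys/password@prim auxiliary sys/password@stby <<'EOF'
DUPLICATE TARGET DATABASE FOR STANDBY
  FROM ACTIVE DATABASE
  NOFILENAMECHECK;
EOF
```

Of course, an active duplicate ships the datafiles over the same link, so a backup-based duplicate with the backup pieces moved out of band (tape or portable disk) may be the only way to keep this off the network entirely.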
Do you have other ideas around this? If you have encountered this, how did
you address it? Is there a related PeopleSoft bug? Any input will be very
much appreciated.