My understanding of supplemental logging is that if we want LogMiner to see a complete record for a row change (updating a single column out of 200, for example) then we have to create a supplemental log group covering all columns (which makes perfect sense), and that this may cause a significant increase in the rate of redo log generation.

i) Has anyone come up with a way of estimating the increase in redo log generation that this might cause? I'm wondering, for example, whether it is possible to use LogMiner without supplemental logging, or with just the bare minimum, to see what volume of redo is associated with particular changes, and then estimate how this would scale up based on average row length.

ii) On the other hand, I could use monitoring to count the inserts and updates on a table in a day and multiply by the average row length to get a rough idea of the raw volume of data that would have to be logged, then apply some factor to scale up to the associated redo log size. I have no idea what that factor might be, though.

iii) Has anyone been through this on Siebel and worked out what the volume increase actually was?
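The back-of-envelope calculation described in (ii) can be sketched as follows. All the numbers and the overhead factor here are illustrative assumptions, not measured Siebel figures; the factor in particular would have to be calibrated by testing on a real workload.

```python
# Rough estimate of extra daily redo volume when supplemental logging
# forces the full row image into the redo stream for every DML operation.

def estimate_daily_redo_mb(dml_per_day, avg_row_len_bytes, overhead_factor=2.0):
    """Estimate daily redo (MB) attributable to supplementally logged row images.

    dml_per_day       -- inserts + updates per day on the table (from monitoring)
    avg_row_len_bytes -- average row length (e.g. AVG_ROW_LEN from DBA_TABLES)
    overhead_factor   -- hypothetical multiplier for redo record headers,
                         change vectors, etc.; a guess that must be calibrated
    """
    raw_bytes = dml_per_day * avg_row_len_bytes
    return raw_bytes * overhead_factor / (1024 * 1024)

# Example: 500,000 DML operations/day, rows averaging 400 bytes,
# with an assumed 2x overhead factor.
estimate = estimate_daily_redo_mb(500_000, 400, overhead_factor=2.0)
print(f"~{estimate:.0f} MB/day of additional redo")
```

Running this with the example inputs gives roughly 381 MB/day, but the result is only as good as the guessed overhead factor, which is exactly the unknown the question asks about.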

Discussion Overview
group: oracle-l
categories: oracle
posted: Apr 29, '08 at 6:54p
active: Apr 29, '08 at 6:54p
posts: 1
users: 1
website: oracle.com

1 user in discussion

David Aldridge: 1 post
