Terabytes of archived redo logs
Hi folks,

I'm about to inherit an interesting project - a group of five 9.2.0.6 databases that produce approximately 2 terabytes (!) of archived redo log files per day.

Apparently the vendor configured the HP ServiceGuard clusters in such a way that it takes over an hour to shut down all of the packages in order to shut down the database. This amount of downtime supposedly can't be supported, so they decided to go with online backups and no downtime.

Does anyone out there have any suggestions on handling 400 gig of archived redo log files per day from each database? I was thinking of either a near-continuous RMAN job or a cron-driven shell script that would write the logs to either tape or a storage server. Actually, I think that our tape library might also be overwhelmed by the constant write activity. My thinking right now is a storage server and a dedicated fast network connection to push the logs over. Storage, though, might be an issue.
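
For concreteness, a minimal sketch of the kind of sweep I had in mind - a small shell script run from cron, driving an RMAN command file. The SID, paths, and the /backup_nfs mount below are just placeholders:

    # arch_sweep.rman - back up archived logs that have no backup yet, then
    # remove them from the local archive destination(s)
    run {
      allocate channel d1 device type disk
        format '/backup_nfs/PROD1/arch_%d_%s_%p.bkp';
      backup archivelog all not backed up 1 times delete all input;
      release channel d1;
    }

    #!/bin/sh
    # arch_sweep.sh - run from cron, e.g.:  0,30 * * * * /home/oracle/scripts/arch_sweep.sh
    ORACLE_SID=PROD1;  export ORACLE_SID
    ORACLE_HOME=/u01/app/oracle/product/9.2.0;  export ORACLE_HOME

    $ORACLE_HOME/bin/rman target / \
        cmdfile=/home/oracle/scripts/arch_sweep.rman \
        log=/home/oracle/logs/arch_sweep_`date +%H%M`.log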

If anyone has any thoughts or suggestions, they would be appreciated.
BTW, I already had the bright idea of NOARCHIVELOG mode and cold backups. :)

Thanks,
Lou Avrami


  • Mercadante, Thomas F (LABOR) at May 3, 2007 at 11:47 am
    Lou,

    Although this is a challenge, this problem is really no different than
    any other database in production. It's just a matter of scale.

    You need enough free disk space to hold, let's say, two days' worth of
    archivelog files. And you also need a fast enough tape backup system that
    you can run backups, say, every two hours to keep the archivelog files
    moving off the system.

    That's the theory: enough free disk in case you have problems with
    backups, and scheduled backups to keep the disk clear.

    That's what I would do.
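
    As a rough sketch of the schedule and the arithmetic behind it (the
    script path is just a placeholder):

        # crontab entry: sweep the archive destinations every two hours.
        # At ~2 TB/day across the five databases, each of the twelve daily
        # runs has to move roughly 2048 GB / 12 = ~170 GB, i.e. an average of
        # about 24 MB/s sustained to tape just to keep pace; two days of
        # headroom on disk works out to ~800 GB of free space per database.
        0 0,2,4,6,8,10,12,14,16,18,20,22 * * * /home/oracle/scripts/arch_backup.sh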

    Tom

  • JApplewhite_at_austinisd.org at May 3, 2007 at 2:11 pm
    Here's a, perhaps, wild thought. Could you establish Physical Standby
    databases on (an)other server(s)? Then you could let your Prod databases
    automatically shovel the archived redo logs to them and periodically
    remove them from the Prod environment once you see they've been
    transferred to the Standbys. You could also gzip them on the Standby side
    to further save space. Gzip is such a CPU hog that I'd not want it
    running on the Prod server.

    You'd also get disaster recovery databases in the process. Just a
    thought.
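
    A minimal sketch of the two pieces, assuming 9.2 Data Guard-style archive
    destinations; the service name, paths, and file naming are made up:

        # init.ora on the primary: keep the local archive destination and add
        # a second one that ships archived redo to the standby via ARCn
        log_archive_dest_1       = 'LOCATION=/oraarch/PROD1 MANDATORY'
        log_archive_dest_2       = 'SERVICE=PROD1_STBY ARCH OPTIONAL REOPEN=60'
        log_archive_dest_state_1 = enable
        log_archive_dest_state_2 = enable

        # on the standby, a cron job could gzip logs that managed recovery has
        # already applied - crudely, anything more than a day old
        find /oraarch/PROD1_STBY -name '*.arc' -mtime +1 -exec gzip {} \;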

    Jack C. Applewhite - Database Administrator
    Austin (Texas) Independent School District
    512.414.9715 (wk) / 512.935.5929 (pager)

    Same-Day Stump Grinding! Senior Discounts!

    Mike's Tree Service
  • Mercadante, Thomas F (LABOR) at May 3, 2007 at 2:22 pm
    How does RMAN fit into this picture? Are you saying he should run a
    physical standby system just to fix a backup issue? It's not a
    horrible idea. But I'm wondering what the cost-benefit ratio would be.
    Acquiring a completely new server and stocking it with enough disk to
    hold the database and the archivelog files?

    Doesn't sound like a reasonable financial solution to me.
  • JApplewhite_at_austinisd.org at May 3, 2007 at 6:01 pm
    You mean everyone doesn't have spare servers sitting around waiting for
    DBAs to make good use of them? I'm shocked!

    Seriously, that's why I asked if it were possible - meaning financially
    and logistically feasible. If it's not, then it's not. However, if
    possible, having a Standby for each Prod DB is not a horrible idea - as
    long as it's in another location.

    I'd assume you'd use RMAN to create the Standbys and continue to back up
    the Prod DBs. Guess there could be a potential recovery problem if you
    removed archived redo logs before RMAN backed them up. However, that
    would be but one of the many challenges you'd face implementing any
    solution to this problem.
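
    One way to stay safe is to let RMAN's own "backup ... delete input"
    remove only what it has just backed up, and to additionally confirm the
    standby has received a log before anything is removed. A sketch of that
    second check, assuming the standby destination is dest_id 2 on the
    primary (the dest_id and the query itself are assumptions):

        -- run on the primary from sqlplus "/ as sysdba": local archived logs
        -- (dest_id 1) still on disk whose sequence has already been applied
        -- at the standby destination (dest_id 2); only these are candidates
        -- for deletion once RMAN has backed them up
        select l.name
        from   v$archived_log l
        where  l.dest_id  = 1
        and    l.deleted  = 'NO'
        and    l.sequence# in (select sequence#
                               from   v$archived_log
                               where  dest_id = 2
                               and    applied = 'YES');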

    Jack C. Applewhite - Database Administrator
    Austin (Texas) Independent School District
    512.414.9715 (wk) / 512.935.5929 (pager)

    Same-Day Stump Grinding! Senior Discounts!

    Mike's Tree Service
  • Dennis Williams at May 3, 2007 at 12:08 pm
    Lou,

    Online backups and no downtime are not the issue. The first question is why
    so much redo log? Potentially you could examine the SQL and see if something
    could be changed. Next, you say it is spread over 5 databases. That seems to
    make things a little more reasonable.
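
    As a starting point for that, a generic look at which sessions are
    generating the redo (nothing here is specific to these systems):

        -- cumulative redo bytes per session, run from sqlplus as a DBA user
        select s.sid, s.username, s.program, st.value as redo_bytes
        from   v$sesstat st, v$statname n, v$session s
        where  st.statistic# = n.statistic#
        and    n.name        = 'redo size'
        and    st.sid        = s.sid
        and    st.value      > 0
        order  by st.value desc;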

    As far as the backup goes, does your disk hardware offer something like a
    snapshot capability, where you could take a snapshot of the data, then
    mount the snapshot on a backup server and perform an RMAN backup?
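
    A rough outline of that snapshot approach, with the storage vendor's
    snapshot command left as a placeholder (everything below is hypothetical,
    and in 9.2 backup mode is set per tablespace):

        -- snapshot_backup.sql: run from sqlplus "/ as sysdba"
        -- put every read/write tablespace into backup mode
        begin
          for t in (select tablespace_name from dba_tablespaces
                    where  contents != 'TEMPORARY' and status = 'ONLINE') loop
            execute immediate 'alter tablespace ' || t.tablespace_name || ' begin backup';
          end loop;
        end;
        /

        -- placeholder for the storage vendor's snapshot command
        host /usr/local/bin/take_array_snapshot

        -- take the tablespaces back out of backup mode, then archive the
        -- current log so the recovery redo is on disk as well
        begin
          for t in (select tablespace_name from dba_tablespaces
                    where  contents != 'TEMPORARY' and status = 'ONLINE') loop
            execute immediate 'alter tablespace ' || t.tablespace_name || ' end backup';
          end loop;
        end;
        /

        alter system archive log current;

    The snapshot can then be mounted on the backup server and backed up there
    without touching the production host.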

    Dennis Williams
  • Mark W. Farnham at May 3, 2007 at 2:02 pm
    Okay, let's do a little math...

    First, the ground rules. Since I have to follow Farnham's Rule of DBA job
    security (never trust your career to a single piece of spinning or ribbon
    rust), and since there is no real point in backups if a simple fire in your
    datacenter can put you out of business, I think you need to make two copies.

    One cheap way to do this is removable disk drives. You can price this up
    yourself easily and then let the disk drive vendor compete against that
    price for bells, whistles, and ease of use.

    Now the good news is that as backup receiver targets, you don't really need
    too many spindles, unless you need to have a really fast recovery speed.

    You can get a 1 TB usb/firewire portable drive for about $410 USD retail
    that will easily sustain 50 MB/second. So let's say you need 10 of those to
    have a nice rotation with time to fetch a pair of replacements if one goes
    bad.

    So that's $4100 (again, retail, you can do better than that).

    And you probably don't want to make the copies on your host, so throw in
    another $900 for a high-bus-speed PC with at least a couple of independent
    USB 2.0 or FireWire / FireWire 800 controllers.

    So twice a day you dump 1 TB of archived redo onto one of these puppies.
    That's going to take about 20,000 seconds, or less than 6 hours at the very
    conservative 50 MB/sec. You might pipe the files through a checksum if you
    want to know you copied what you thought you copied.
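
    Something like this loop would do the checksummed copy (paths, mount
    point, and the *.arc naming convention are made up; cksum is used only
    because it exists on every Unix, md5sum would do as well):

        #!/bin/sh
        # copy each archived log to the removable drive, compare checksums,
        # and delete the source only on a verified match
        SRC=/oraarch/PROD1
        DST=/mnt/usb_backup/PROD1

        for f in "$SRC"/*.arc; do
            [ -f "$f" ] || continue
            cp "$f" "$DST/" || exit 1
            src_sum=$(cksum < "$f")
            dst_sum=$(cksum < "$DST/$(basename "$f")")
            if [ "$src_sum" = "$dst_sum" ]; then
                rm "$f"        # verified copy, reclaim space on the host
            else
                echo "checksum mismatch on $f" >&2
                exit 1
            fi
        done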

    When the copy finishes, you plug that drive and another into the PC and copy
    for another 6 hours. If you're doing checksums maybe you invested in a pair
    of chips for that PC. Now you can take one of those off site and delete that
    1 TB from your host and you have another 12 hours to deal with your other 1
    TB.

    I'm sure you can do better than that, and I suppose if your folks insist on
    ribbon rust you might have to copy to tapes as well, but then I hate tape
    drives and have an unfair bias against them. The only disk drives I ever
    hated were the ones with "flocculent stiction" that would pretty reliably
    fail to restart if you spun them down after 40 days of running; you had
    about a 50-50 chance of "fixing" them by following the instructions in the
    field diagram that looked like a guy throwing a discus, except for the part
    about not letting go... but that's another story - though I'll bet at least
    2 people on this list other than me actually had to do that.

    Anyway, I'm sure you can put a reasonable solution in place for under
    $10,000 USD. Oh - keep a little chart on the MTBF for those drives and cut
    it in half since you're shipping them a lot.

    Regards,

    mwf

