creative use of storage snapshots
Hi List

I have a client with storage technology that allows copy-on-write snapshots
to create a writeable copy of a storage volume. They are looking at
potentially using this technology to provision clones of a DR database for
development/testing and reporting purposes. The idea is that because these
databases would be a) short-lived, b) have a limited volume of changed data
blocks going through them, and c) not have high performance requirements,
they could save considerable amounts of storage by splitting off a clone
using the snapshot technology rather than a conventional Oracle-based
approach. I'm aware of Delphix database virtualization, which looks like it
addresses similar issues in a similar way. Is anyone out there doing
something similar? It sounds to me like one of those great ideas that have a
huge gotcha that I can't think of right now.


  • Marcin Przepiorowski at Dec 20, 2010 at 12:31 pm

    On Mon, Dec 20, 2010 at 12:07 PM, Niall Litchfield wrote:

    like it addresses similar issues in a similar way. Is anyone out there doing
    something similar - it sounds to me like one of those great ideas that have
    a huge gotcha that I can't think of right now.
    Hi Niall,

    I don't have a similar solution myself, but I remember a discussion with a
    colleague who has one in place.
    He mentioned two issues:

    - initial overhead just after the snapshot, when the primary DB writes to
    a block and that block has to be copied into the snapshot area
    - read overhead - all clones read the same unchanged disk blocks as the
    primary database

    If you can't limit the number of IOPS coming from the clones, this can
    become a potential bottleneck.
  • Michael Dinh at Dec 20, 2010 at 1:57 pm
    Hello Niall,

    We are currently doing the same thing here for development, QA and Level 3.

    There is a snapshot pool (storage) that holds all the changed blocks.

    With 8 OLTP databases and 3 environments each, that's 24 snapshots, which
    can fill the snapshot pool up quickly.

    One way to resolve the snapshot pool filling up is to do a resnap, which
    has caused disruptions to development and QA.

    We actually have a client who wants real production data for testing, and
    we provide a new snap for them every 2 weeks.

    It's a great idea as long as changes are kept to a minimum or you have a
    large snapshot pool.

    Depending on how often environments are resnapped, it can be a PITA.

    HTH

    -Michael.

  • Kyle Hailey at Dec 21, 2010 at 12:04 am
  • Nuno Souto at Dec 21, 2010 at 11:58 am
    Works like a charm, been doing it for nearly 2 years now with our DW.

    Things to watch out for:
    - the snapshot dbs are not full-on production ones, they have to be strictly for
    development/testing purposes.
    - if the gap between the snapshot and the original gets too big, the subsequent
    resynchs can take quite a long time. Work on the basis of number of blocks
    changed: the more, the slower the catch up will be.

    There are a number of technologies that can reduce the overhead of the first
    copy of each changed block of the snapshot. NetApp has an interesting
    mechanism that reduces the overhead to a single write operation; EMC is more
    of a read/compare/write proposition.

    Worth re-stating: no one runs production on a snapshot, so whichever technology
    you use don't expect miracles. Once the expectation is set right, the
    technology works like a charm and is eminently suitable to the "refresh/test"
    cycle of development environments. It makes all those refreshes a matter of
    minutes, rather than hours on end.

    We're planning to expand this approach to all dbs: MSSQL and Oracle, once we
    finally decide on our new hardware platform. Coming up in a couple of months.
    We're expecting huge reductions in volume for multiple Peoplesoft HR, ERP and
    CRM environments, where everyone seems to need a copy of the entire database to
    test a single invoice...

    --
    Cheers
    Nuno Souto
    in sunny Sydney, Australia
    dbvision_at_iinet.net.au

  • Svetoslav Gyurov at Dec 21, 2010 at 12:01 pm

    Hi Niall,

    It's a great technology which saves a lot of time and space. I've
    done this with an EVA6100 for a local bank, for reporting purposes.
    They were running a three-node RAC for their core banking and
    another single machine for reporting. They were using two virtual
    disks, one for DATA and one for FRA. Snapshots were created without
    any preparation of the RAC. As Martin said, we were using the same
    procedure. Very basically, we were taking a snapshot of the DATA
    disk, presenting the snapshot disk to the reporting machine,
    starting the instance in mount, changing some parameters and then
    opening the database. For the purpose of 1-day-old reporting it is
    a perfect solution.
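
    To make the host side of that concrete, the clone open was essentially
    something like the sketch below; the instance name, pfile path and the
    exact parameter changes shown are only illustrative, not the precise
    EVA procedure:

        # array snapshot of the DATA disk already taken and presented to
        # the reporting host, with the disk / disk group visible there
        $ export ORACLE_SID=REPCLONE     # clone instance name (assumed)
        $ sqlplus / as sysdba
        SQL> -- pfile copied from production and edited for the clone,
        SQL> -- e.g. cluster_database=false and control_files pointing
        SQL> -- at the snapshot copy of the controlfile
        SQL> STARTUP MOUNT PFILE='/u01/app/oracle/dbs/initREPCLONE.ora';
        SQL> -- the snapshot is crash-consistent, so crash recovery runs
        SQL> -- automatically when the database is opened
        SQL> ALTER DATABASE OPEN;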

    Using a snapshot (not a snapclone) was very convenient, because at
    the end of the day only a few percent of the snapshot was filled.
    For example, if the DATA disk is 1TB, the snapshot at the end of
    the day would be 10-20GB maximum (it really depends on how
    intensive the workload on the database is). As the snapshot is
    deleted and re-created every day, this was a very space-saving
    scheme, and we didn't notice any performance issues on the
    production database. Although this process can be automated, they
    were using the GUI for single-disk snapshots.

    Here are the limitations (at least of the EVA):
    1. With the latest version of the firmware for the 4/6/8400, HP
    introduced LUNs bigger than 2TB, which is a breakthrough. I
    recently discovered that neither snapshots nor snapclones bigger
    than 2TB can be created!
    2. If one wants to create a snapshot of two disks simultaneously
    (a multisnap), the GUI can no longer be used.
    3. Creating a multisnap requires fully allocated containers, i.e.
    the full size of the LUNs. Although the snapshot is created
    immediately, it then occupies the same space as the original
    virtual disks.

    Besides reporting, test and dev, these snapshots and snapclones
    could also be used for backup. When doing database or application
    upgrades, this can serve as a very fast recovery solution.

    Regards,
    Sve

  • Niall Litchfield at Dec 21, 2010 at 5:10 pm
    Thanks to all for the great replies, especially to Kyle for the whitepaper
    link and to those with real-life experience in serious Oracle shops; it's
    nice to know I'm not (that) insane. And thanks to everyone for not
    mentioning DBA 2.0, even though this is a great example of it, right down
    to the way it came about. If project and client permit, I'll likely blog
    it - first I've got to get the project started.

  • Glenn Santa Cruz at Dec 21, 2010 at 5:26 pm
    Grateful to all who have weighed in on this topic.

    Could you share experiences using these same tools/techniques for keeping a
    backup retention with RMAN? We've been considering using techniques as
    described to achieve a backup retention of 30 days (per our corporate
    policy), leveraging snapshots to accomplish it. The basic idea is to "seed"
    a volume with a full RMAN backup; then on a daily basis, take a snapshot of
    the volume, run our RMAN backup (applying changes to the datafiles), then
    delete any snapshots older than the retention period. Our thought is that
    the SAN would maintain 30 snapshots -- if we need a restoration to any of
    these points, we'd present the snapshot as a LUN to a secondary host.
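
    In RMAN terms the daily step we have in mind is the standard incrementally
    updated image copy, wrapped in a storage snapshot. A rough sketch follows;
    the snapshot create/delete steps belong to the array CLI, so they appear
    only as comments, and the tag and path used below are just examples:

        $ rman target /
        RMAN> # one-time seed of the snapshot volume with an image copy
        RMAN> BACKUP AS COPY DATABASE FORMAT '/snapvol/%U' TAG 'snapseed';

        # daily, after snapshotting /snapvol on the array (which preserves
        # yesterday's state of the copy):
        $ rman target /
        RMAN> # roll the image copy forward, then take today's level 1
        RMAN> RECOVER COPY OF DATABASE WITH TAG 'snapseed';
        RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY
        2> WITH TAG 'snapseed' DATABASE FORMAT '/snapvol/%U';

        # finally, drop any array snapshots of /snapvol older than 30 days
        # (again on the array side, not in RMAN)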

    Naturally, all of the above is to be done on non-production hardware
    (against a Data Guard standby database).

    Is anyone doing something similar? Pitfalls? This is on HP EVA 4400.

    Thanks

  • Rajendra.pande_at_ubs.com at Dec 21, 2010 at 5:36 pm
    Yes, we do.

    - Take a one-time full image copy.
    - Do a daily incremental and merge it with the existing image copy.
    - Take a snapshot (this is using Data Domain) and do some housekeeping in
    RMAN to make this a valid backup as of that date (see the sketch below).
    - And so on.
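
    One way to do that housekeeping, once a snapshot is mounted or presented
    somewhere, is simply to catalog it so RMAN knows about the copies it
    contains. The mount path below is just an example; CATALOG and LIST are
    standard RMAN commands:

        $ rman target /
        RMAN> # register the datafile copies sitting inside the mounted snapshot
        RMAN> CATALOG START WITH '/mnt/snap_20101221/' NOPROMPT;
        RMAN> # confirm what that gives us to restore from
        RMAN> LIST COPY OF DATABASE;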

    Whether there is a DG standby involved or not should not make a difference.

    I understand that NetApp has a slightly different snapshot process - a
    NetApp snapshot is per device, not per directory - so I guess you will need
    a slightly different technique with NetApp, but I expect this will pretty
    much work there as well.



  • David Roberts at Dec 21, 2010 at 7:17 pm
    One point that I don't see mentioned (unless I missed it): if you are using
    some form of block-level replication as a DR solution, what happens when
    the disaster is the disk controller writing garbage to your disk?

    If you are using DG then, depending on the type, you get varying and
    earlier opportunities to spot the corruption, or opportunities to recover
    from it - opportunities that are lacking when you blindly have hardware
    copying data blocks.

    I agree that these are fine solutions for providing development and testing
    environments, but I would suggest caution with regard to adopting these
    technologies for DR purposes.
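
    For instance, on 11g how much of this the standby can catch depends on
    settings like the ones below; the values shown are just one reasonable
    choice, set on both primary and standby:

        $ sqlplus / as sysdba
        SQL> -- verify block checksums as blocks are read and written
        SQL> ALTER SYSTEM SET db_block_checksum = FULL SCOPE=BOTH;
        SQL> -- let redo apply on the standby detect lost writes (11g+)
        SQL> ALTER SYSTEM SET db_lost_write_protect = TYPICAL SCOPE=BOTH;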

    Dave
  • Kyle Hailey at Dec 23, 2010 at 12:15 am
    As an FYI / small side note: it's let's-make-a-deal time at Delphix. If you
    have ever had the fun of closing an Oracle deal at their fiscal year end,
    you know what happens. Not only is it fiscal year end, but Delphix is also
    millimeters away from adding an extra figure to their end-of-quarter total,
    and I'm betting that now is the last time prices this low will ever be seen
    - just my viewpoint from inside the castle.

    If you can imagine the ease and savings of virtualizing databases:

    - Super fast provisioning - three clicks and a few minutes to stand up a
    fully functional, point-in-time 10g/11g database copy/clone
    - Storage savings - a new database copy basically consists only of some
    pointers and the space it takes for private redo and temp

    then this might be of interest.

    Delphix doesn't require filesystem snapshots or EMC or NetApp. It only
    requires an x86 box with about the same amount of disk space as the
    database you want to virtualize. The source database is copied onto the
    Delphix machine with RMAN calls, thus validating the data; the data is
    compressed by Delphix, and Delphix handles the snapshots and provisioning
    of virtual databases. Virtual databases can be provisioned from the
    original source copy, from any incremental snapshot or from any SCN. Then
    you can make as many copies as you want, within reason, for almost free in
    terms of storage.

    Here is a demo to give you an idea of how it works
    http://delphix.com/resources.php?tab=product-demo

    Apologies if this sounds like a sales pitch - it is. I'm excited about what
    Delphix is doing, but it's also a rare opportunity. Unlike the DB Optimizer
    opportunity, this deal is definitely a departmental-level buy, so I imagine
    it is only of interest to department heads that have some extra budget that
    will be lost at year end and are looking for a good way to use it.

    -Kyle

    PS: for these cut-rate deals, everything has to be in by next Friday at the
    absolute latest. For the best info contact
    kaycee.lai_at_delphix.com and/or garrett.stanton_at_delphix.com

    On Wed, Dec 22, 2010 at 11:27 AM, David Roberts <
    big.dave.roberts_at_googlemail.com> wrote:
    While I accept your primary argument that losing data on a SAN is
    difficult, I would also observe that the not-out-of-the-box precautions you
    have taken in addition to this also reduce the chances of data loss.

    Nevertheless, there are those who will be using non-SAN-based replication
    (we in the past used SNDR from a local server to a remote server), and
    there are these persistent tales of data loss from SANs, the validity and
    numerical significance of which are difficult to judge.

    By nature, disasters tend to be unexpected and different from those that
    you have tackled before.

    I do admit that at a certain level of protection it might not be
    cost-effective or economically justifiable for all organisations to
    implement the highest levels of data resilience. However, the comfort that
    DG could be replicating my data between two systems at a higher level (than
    a SAN or operating system does) would give me a greater degree of
    confidence. In one case (with DG) I could be replicating from one
    manufacturer's hardware and operating system to another manufacturer's
    hardware and operating system. And I would tend to trust the highest level
    of replication (apart from bespoke replication coded by local developers
    and not implemented elsewhere) more than that provided by hardware vendors.


    Always remember: 'There's software in those BIOSes.'

    Regards,

    David Roberts
    On Wed, Dec 22, 2010 at 10:59 AM, Nuno Souto wrote:

    That is an argument often invoked to support DG, but it doesn't take into
    account how replication is done in most modern SAN devices.

    For example: in our EMC we replicate the FRA with the last RMAN backup,
    then every 2 hours we archive redo logs to the FRA and replicate them using
    asynchronous, command-line-initiated replication. Due to the way EMC does
    replication, any potential disparity between blocks will get corrected on
    the next send. And it has not happened once in the nearly 2 years we've
    been doing it!
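
    In sketch form (with the EMC call reduced to a placeholder, since the
    actual replication command line is site- and array-specific), that
    2-hourly job is roughly:

        # run from cron every 2 hours
        $ sqlplus -s / as sysdba
        SQL> -- force the current redo log to be archived into the FRA
        SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
        SQL> EXIT
        # then kick off asynchronous replication of the FRA LUN to the
        # remote array (placeholder for the real EMC command line)
        $ replicate_fra_async.sh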

    Oh, BTW: it's not the disk controller that does that, it's a completely
    different mechanism from the SAN disk I/O. I suspect whoever came up with
    that "danger" really hasn't used a late-generation SAN.

    I'll wear the risk of two consecutive transmission errors on FC - recall
    that it is subject to parity and ECC as well - against what it'd cost us to
    get an IP-based connection resilient and performant enough to do DG at our
    volume and performance point. In fact, I know exactly what it'd cost us,
    and it's simply not feasible or cost-effective.

    What Oracle should do is make DG independent of the transport layer. I.e.
    if I want to use Oracle's IP-based transport, or ftp, or scp, or a script,
    or navicli, or dark fibre non-IP, or carrier pigeons/smoke signals, it's
    entirely up to me - just let me do it. There is really no reason why DG has
    to be IP-only.

    --
    Cheers
    Nuno Souto
    dbvision_at_iinet.net.au



