speeding up expdp
Hi,

We have an Oracle 10g database on RHEL5. I would love to find a way to speed up
the export of the database. It currently takes about 7.5 hours. The main problem
is with 2 tables, one of which is 30GB, the other is 70GB.

I've looked through the documentation for expdp, as well as gone through all of
the Metalink notes I could find related to expdp running slowly. I've tweaked
things here and there, but I can't get the export to run any faster. I know the
system is a little slow, and I'm writing to an NFS-mounted drive, but I was
really hoping that this could be done faster. I haven't tried using exp instead,
but I'm considering that for another test.

Any suggestions?

- Maureen


  • D'Hooge Freek at May 5, 2012 at 5:51 am
    Maureen,

    Is your database an enterprise edition or a standard edition?

    If you have Enterprise Edition, then expdp can work with multiple parallel slaves.
    One thing you then need to check is whether the 2 large tables are also exported in parallel, or whether only 1 process is exporting them.
    If a table has a LOB column, for instance, Oracle can't export it in parallel.
    Sometimes Oracle wrongly chooses the direct path method for a table, which disables the parallel feature. In that case you can try to force expdp to use the external table method.
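
    As a minimal sketch, forcing the external table method would look something like this (the directory object, dump file names and parallel degree here are placeholders, not taken from your setup, and ACCESS_METHOD is an undocumented parameter; see note 552424.1 below for how it is supported):

        expdp system parfile=expfull.par

        # expfull.par
        directory=DPDUMP_DIR
        dumpfile=expfull_%U.dmp
        logfile=expfull.log
        full=y
        parallel=4
        access_method=external_table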

    The notes below give you a good starting point on how to improve the performance of datapump and how to check what is happening:

    Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp) [ID 453895.1]
    Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump [ID 286496.1]
    Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data ? [ID 552424.1]


    Kind regards,

    Freek D'Hooge
    Uptime
    Oracle Database Administrator
    email: [email protected]
    tel +32(0)3 451 23 82
    http://www.uptime.be
    disclaimer: www.uptime.be/disclaimer
  • Maureen English at May 5, 2012 at 5:53 pm
    The database is Enterprise Edition.

    I've tried the export with parallel=2, parallel=4 and parallel=6, and all take
    about the same amount of time. I also set the database parameters
    parallel_min_servers and parallel_max_servers to 6 and 12, respectively. I see
    the processes start up in the database at the beginning of the export, but I
    didn't check to see how many were still running when the export was done with
    everything but these last 2 tables. I've just been watching the dump
    files... only 2 are being written to at the end.
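
    Next time I'll probably check with something like this query against the standard dictionary views (untested here), which shows the sessions belonging to the running Data Pump job:

        SELECT s.sid, s.serial#, s.program, d.job_name
          FROM v$session s
          JOIN dba_datapump_sessions d ON s.saddr = d.saddr;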

    I do believe that both of these tables have a LOB column. Does that mean I
    just have to live with the 7.5 hours? At least the import to the new machine
    only takes about 1.5 hours.

    I'm also excluding indexes/constraints for the export and import.
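
    For reference, that exclusion is just the standard expdp/impdp parfile syntax, along the lines of:

        exclude=index
        exclude=constraint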

    I'm pretty sure I read through the checklist document, but not the other 2.
    I'll check them out.

    Thanks!

    - Maureen
  • GG at May 5, 2012 at 7:10 pm
    Hi,
    LOBs are a pain, and there is no magic way to make them performance friendly.
    You have to be creative in such situations.
    How about moving them to a separate tablespace via online redefinition, and
    then transporting that tablespace and plugging it in? (A rough sketch follows
    below.)

    Secondly, you have to know your system limits; maybe that's why parallel did
    not help.
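
    Sketching just the transport step (the tablespace and file names are placeholders, and the tablespace must be self-contained):

        ALTER TABLESPACE lob_data READ ONLY;
        expdp system directory=DPDUMP_DIR transport_tablespaces=lob_data dumpfile=lob_ts.dmp
        (copy the datafiles and the dump file to the target machine, then)
        impdp system directory=DPDUMP_DIR dumpfile=lob_ts.dmp transport_datafiles=/u02/oradata/lob_data01.dbf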
    Regards
    GregG
  • Guillermo Alan Bort at May 5, 2012 at 7:13 pm
    I would suggest benchmarking the NFS mount. If you have a file of similar size
    (you can create it on the fly) you can try copying it directly from the
    database storage (which I'm hoping is SAN) to the target directory (NAS) and
    see the maximum throughput you can get. I've had some dealings with NFS and it
    can be astoundingly slow (something the database seems to hide well enough
    using the SGA, but which is a pain for massive I/O).
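
    Something like this makes a crude but quick write test (the path and size are placeholders; oflag=direct bypasses the page cache so you measure the NFS write path rather than RAM):

        dd if=/dev/zero of=/nfs/dump/ddtest.bin bs=1M count=10240 oflag=direct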
    hth
    Alan.-


  • Michael Dinh at May 6, 2012 at 12:44 am
    I feel sorry for you, but misery loves company :=)

    A 4GB database full backup using RMAN takes over 6 hours to NFS.

    Tangent over; back on topic now.

    dumpfile=migration_wh%U.dmp <== important: the %U substitution variable lets each parallel worker write to its own dump file. Omitting it was the mistake I made before.
    PARALLEL=12
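
    Put together as a parameter file, that looks something like this (the directory object and the filesize value are placeholders):

        directory=DPDUMP_DIR
        dumpfile=migration_wh%U.dmp
        parallel=12
        filesize=10G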

    Hopefully, it's that easy. Otherwise, there are several articles on configuring NFS for use with Oracle.

    Michael Dinh
    Disparity Breaks Automation (DBA)

    Confidence comes not from always being right but from not fearing to be wrong - Peter T Mcintyre
    Great minds discuss ideas; average minds discuss events; small minds discuss people - Eleanor Roosevelt
    When any rule or formula becomes a substitute for thought rather than an aid to thinking, it is dangerous and should be discarded -Thomas William Phelps

  • Svetoslav Gyurov at May 6, 2012 at 7:40 pm
    Maureen, as Greg said, you need to know your system limits: whether you are
    waiting on CPU or on disk I/O. Also make sure you're exporting to a different
    location than your database files. The parallel option gives you a big benefit
    if you have partitioned tables, as the partitions are exported by different
    processes.
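
    A quick first look on the database host, using the standard Linux tools:

        iostat -x 5   # per-device %util and await show whether the disks are saturated
        vmstat 5      # the "wa" column shows CPU time spent waiting on I/O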

    If it's applicable, consider using transportable tablespaces.


    Regards,
    Sve
  • Pawel Smolarz at May 7, 2012 at 9:55 am
    Hi,

    In addition to optimizing the export (expdp), I would try the "wsize" option
    for the NFS-mounted drive. Tuning this parameter may speed up write
    operations; it seems that the bottleneck in this case is NFS. I had a similar
    case where a large file (2GB) was created by UTL_FILE: performing fewer flush
    operations and increasing wsize greatly accelerated writes to the NFS storage.
    As I remember, the NFS version (a mount option) may also affect performance.

    Did you check export performance on local storage?
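
    For reference, the classic mount options recommended for Oracle on a Linux NFS client look something like the following (treat the exact values as a starting point to verify against the MOS guidance for your NAS, not a definitive setting):

        mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768 nas:/export/dump /u02/dump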



    Pozdrawiam / Regards,
    Paweł Smolarz


  • Remigiusz Sokolowski at May 7, 2012 at 10:28 am

    On 2012-05-07 11:54, Pawel Smolarz wrote:
    performing fewer flush operations and increasing wsize greatly
    accelerated writes to the NFS storage
    In the case of UTL_FILE you need fewer flushes, or even none at all, as
    flushing is done implicitly anyway.
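
    As a hypothetical sketch of that point, a PL/SQL loop can rely on UTL_FILE's internal buffering and the implicit flush on close instead of flushing per line (the directory object and file name are placeholders):

        DECLARE
          f UTL_FILE.FILE_TYPE;
        BEGIN
          f := UTL_FILE.FOPEN('DPDUMP_DIR', 'big.txt', 'w', 32767);
          FOR i IN 1 .. 1000000 LOOP
            UTL_FILE.PUT_LINE(f, RPAD('x', 100, 'x'));  -- no UTL_FILE.FFLUSH here
          END LOOP;
          UTL_FILE.FCLOSE(f);  -- remaining buffered data is flushed on close
        END;
        /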

    --
    Pole nakazi

    ----------------------------------------------------------------------
    Remigiusz Sokolowski <[email protected]>
    pos : DBA at DIiUSI
    addr : Nordea Bank Polska SA, Luzycka 6A Street, 81-537 Gdynia, Poland
    phone : +48 58 667 17 43
    mobile: +48 602 42 42 77
  • Maureen English at May 7, 2012 at 11:23 pm
    Thank you all for your suggestions, especially the ones with information on how to
    find out where the bottleneck might be!

    The sysadmin is looking at options for the NFS mount. Other than that, it looks
    like the 2 large tables with LOB columns are my problem. I did try using exp
    instead of expdp and was able to reduce the export time by a third.

    There is some good news, though. The system I've been working on is a
    development system with 4 databases running on it. The production systems that
    I'll be doing the final export from are faster machines with more CPUs and
    more memory, and only one database on each system. I'm hoping to get access to
    the production systems soon in order to test the export of the 30GB table and
    see how it compares to the dev system.

    - Maureen


