Migrating 9i to 10g performance issues
Source DB: Dell server, RHEL4, 12G RAM, Oracle 64 bit 9.2.0.8 SE1, little
endian
Target DB: IBM VM on series z9, SLES10, 4G RAM, Oracle 10.2.0.3 EE, big
endian
Database is just shy of 1 terabyte in size--70 percent of data is in a
single table; total of 212 tables.
12 very large tables, including the monster--most have no column, like a
static date, that can be used to logically break the data into "partitions".

I'm working on procedures to move our production database from the Dell to
the IBM. My tests so far indicate that I'm going to need more than 24
hours. Management, in their infinite wisdom, is insisting that it be done
in less than 8. It will take as long as it takes, but I'm wondering what I
can do to speed things up. So far I've done the following:

1) exp/imp - too slow overall
2) plsql that commits at regular intervals, depending on the size of the
table - works very well for all tables under 1M; can load 111 tables in
under 2 hours using 2 concurrent sessions. Works for larger tables, but
obviously takes much longer. I had 2 sessions doing tables under 1M and 2
doing tables between 1M and 100M concurrently. Didn't try for the 12 tables
over 100M. (A sketch of this approach follows the list.)
3) Direct-path insert - used on the table holding 70 percent of the data.
Four months ago I insisted this table have a static date column added. I
can logically break the data loads down by date--they want the most current
data loaded first, the remainder can be done over a period of days. This is
working reasonably well, but having done this same thing once before on this
table, I know it will take about a month to get all the data moved based on
the constraints I'm working under--can't be done during core business hours,
etc.
4) I put the target database in noarchivelog mode for my testing. Is this
a wise move for migrating production during the go live?
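
As a reference for item 2, a minimal sketch of an interval-commit copy,
assuming a database link named OLD9I back to the 9i source, a hypothetical
table MYTAB with the same structure on both sides, and a 10,000-row batch per
commit:

# minimal sketch only -- OLD9I, MYTAB, the credentials and the batch size are placeholders
sqlplus -s system/pass <<'EOF'
DECLARE
  CURSOR c IS SELECT * FROM mytab@old9i;          -- pull rows across the db link
  TYPE t_rows IS TABLE OF mytab%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 10000; -- one batch per commit
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO mytab VALUES l_rows(i);
    COMMIT;
  END LOOP;
  CLOSE c;
END;
/
EOF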

Management has suggested that I leave off old data and load it later. Doesn't
work with 95 percent of the tables because of their structure and foreign key
constraints. They also suggested I use both the primary and the standby
databases to read from. No way to test this until I go live--constraints
again--although this actually was part of my plan from the beginning. Will
too many concurrent sessions loading data slow things down too much? What
would I look at to determine this? 10g is new to me so I'm not familiar
with all the features yet and may be missing something significant.

Any suggestions are appreciated, other than telling management what they can
do with their time constraint. (Already did that.)

Thanks.

Sandy


  • Howard Latham at Mar 25, 2008 at 12:19 pm
    For our 700 GB database I used transportable tablespaces / Data Pump from
    Windows NT to Red Hat.
    It worked well once I got the strategy right.
    In my experience, get a spare box for a few practice runs.
  • Sandra Becker at Mar 25, 2008 at 1:10 pm
    I was told that in order to use transportable tablespaces, the endianness
    had to be the same. Is this incorrect? I'm already practicing and have
    ruled out some possibilities.

  • Jost, Jörg at Mar 25, 2008 at 1:32 pm

    If you go the route Mayen suggested--first migrate, then transport--that
    restriction no longer applies: Oracle 10g supports transportable tablespaces
    across different endianness; Oracle 9i does not.

    See the Database Administrator's Guide, Chapter 8, Managing Tablespaces.
    There is an example scenario that can help you dig into it.

    So I think that approach is worth a try.

    hth
  • Howard Latham at Mar 25, 2008 at 1:46 pm
    We had major problems and tried many routes. The idea of transportable
    tablespaces came up at a panel at UKOUG in Birmingham. We found that export
    just died when it tried to create very large files. It took a long time
    because I had to do tests at weekends--that's why I suggest a spare box
    where you can test opening the new baby at leisure. Yes, you have to go to
    10g, but it works--other methods, notably export, do not! I found that
    there was little knowledge at Oracle on doing this--but it was a couple of
    years ago now.
  • Mark W. Farnham at Mar 25, 2008 at 3:55 pm
    10gR2 indeed supports transportable tablespaces across platforms with
    different endian word order. However, the transport is *not* in place and
    the tablespaces must be converted on either the source side or the
    destination side. (That's all in the manual.)

    That has implications for the time and space required for the transport,
    especially as compared to the "in place" transportation possible between
    platforms of the same endian word order, the latter being pretty much
    magically quick compared to about anything else for a shutdown, move it all
    protocol. If you can fit that within a window that does not hurt the
    business it is also probably the most reliable and least costly protocol. If
    you do the conversion on the target side you can minimize the cost and
    overhead on the current production server to facilitate dress rehearsals.

    If you have a long calendar time to execute the move as long as you are not
    down for long for the cut-over, there are various ways to get the data over
    to the target with an as-of date and then apply changes to logically roll
    the data on the target forward to the point where you shut down the "old"
    server for good. One of these methods would be the alternative if you cannot
    fit a "shutdown, move it all protocol" into the allowed time.

    You might also want to consider avoiding tempting fate by adding in a full
    backup on the target to the time required. No one will be having fun if your
    new machine has a hiccup requiring a recovery before you get a target side
    backup and have already performed interactively sourced transactions on the
    target that are not logically logged for replay. The backup time can be
    phased in the "as-of date copy, then catch up protocols" but is sequential
    to the end of the "shutdown, move it all protocol."
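
    Scripting that first target-side backup is cheap; a minimal sketch, with a
    hypothetical backup destination (if the target is still in NOARCHIVELOG,
    the database has to be mounted rather than open for the backup):

# minimal sketch: first full backup of the new target right after cut-over
rman target / <<'EOF'
BACKUP DATABASE FORMAT '/backup/newdb_%U';
EOF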

    Regards,

    mwf

  • Sandra Becker at Mar 25, 2008 at 4:08 pm
    Thanks, Mark, for the good information. I had forgotten about the
    conversion aspect of transportable. No way I can do it on the source
    side--can't add more disk--questionable on the target side at this point in
    time. We have been running more tests on the larger tables and have found
    that direct-path insert in parallel will significantly cut load time. We
    will continue to test various scenarios over the next 3 weeks. There also
    is some discussion of moving the largest table a month at a
    time--fortunately, it now has a static date--moving the most current months
    first.
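
    For the archives, one slice of that parallel direct-path load might look
    like this; a minimal sketch, with OLD9I, BIGTAB and LOAD_DATE standing in
    for the real database link, table and static date column:

# minimal sketch, run on the 10g target -- all names and dates are placeholders
sqlplus -s system/pass <<'EOF'
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 4) */ INTO bigtab t
SELECT * FROM bigtab@old9i
WHERE  load_date >= DATE '2008-02-01'
AND    load_date <  DATE '2008-03-01';   -- one month per pass, newest first
COMMIT;
EOF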

    Thanks everyone for all your suggestions.

    Sandy
  • Sandra Becker at Mar 25, 2008 at 1:16 pm
    I did a little more research on transportable tablespaces. Only two of my
    tablespaces are self-contained and they hold very little data so I don't
    think this is a viable option. Also the idea of upgrading then migrating
    has been nixed and will not be considered going forward for various reasons,
    none of which make sense to me at this point.

    Sandy
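
    For anyone checking self-containment the same way, DBMS_TTS.TRANSPORT_SET_CHECK
    lists the violations; a minimal sketch, run on the source as a DBA, with
    placeholder tablespace names:

# minimal sketch -- the tablespace names are placeholders
sqlplus -s system/pass <<'EOF'
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS,DATA_TS,INDEX_TS', TRUE);
SELECT * FROM transport_set_violations;  -- no rows means the set is self-contained
EOF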

  • Howard Latham at Mar 25, 2008 at 1:23 pm
    You transport the whole lot, then it doesn't matter about self-containment.
    I will try to post my script--pretty horrific!
  • Howard Latham at Mar 25, 2008 at 1:28 pm
    expdp %USERNAME%/easyeasy dumpfile=expdatlive.dmp
    transport_tablespaces=(USERS,TOOLS,INDX,PVDF_REFDATA,OEM_REPOSITORY,QHRS91_IDX,
    COMMS2002_REFDATA,HHMASTER_DATA,COMMS,PVDF_DEMOGS_IDX,COIN_STUDY_INDEX,PVDF_IDX,
    COMMS91_REFDATA,COMMS91,QC_VIEWING_IDX,COMMS2000,COMMS_IDX,QHRS2002,ESTAB_DATA,
    COMMS2001,ESTAB_IDX,PVDF_WORK_DATA,QC_INDEX,QHRS91_DATA,COMMS91_IDX,
    PM_CONTACTS_INDEX,HHMASTER_IDX,QC_VIEWING_DATA,PVDF_REFDATA_IDX,PVDF,
    QHRS91_REFDATA_IDX,QHRS91_REFDATA,PM_CONTACTS_DATA,UNDOTBS,SMALL_TABLES,
    COMMS2002,COMMS2003,PVDF_VIEW_IDX,PROGS2002,PROGS_IDX,PVDF_VIEWING_DATA,
    COMMS2004,QC_REPORTS_DATA,COMMS_IDX_2002,COIN_STUDY_DATA,PVDF_WEIGHTS_DATA,
    PVDF_WEIGHTS_INDEX,PVDF_WORK_INDEX,COMMS2005) transport_full_check=Y
    directory=my_dir logfile=my_dir_log
    rem copy h:\oradata\live\*.* V:\H\

    Then copy all the data files

    Once you have done this, you use RMAN to convert the datafiles.
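
    The conversion on the big-endian target would look something like this; a
    minimal sketch, with hypothetical paths, and the source platform name taken
    from V$TRANSPORTABLE_PLATFORM:

# minimal sketch, run on the 10g target after copying the datafiles over
rman target / <<'EOF'
CONVERT DATAFILE '/stage/users01.dbf'
  FROM PLATFORM 'Linux x86 64-bit'
  DB_FILE_NAME_CONVERT '/stage/','/oradata/newdb/';
EOF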
  • Mayen.Shah_at_lazard.com at Mar 25, 2008 at 12:43 pm
    I would recommend you break your migration into two stages:

    Version migration (9i to 10g)
    Machine migration (Dell to IBM)

    That way you can have two short and manageable downtime windows (maybe over
    two weekends) and the process will be easier.
    An in-place migration from 9i to 10g will be fairly quick, and there are
    many options available for migrating from Dell to IBM on the same database
    version.

    One piece of advice regarding an export/import type of migration ---> DON'T
    DO IT.....

    Thanks
    Mayen

    "Sandra Becker"
    Sent by: oracle-l-bounce_at_freelists.org
    Mar 25 2008 07:56 AM
    Please respond to
    sbecker6925_at_gmail.com

    To
    "oracle-l"
    cc

    Subject
    Migrating 9i to 10g performance issues

    Source DB: Dell server, RHEL4, 12G RAM, Oracle 64 bit 9.2.0.8 SE1, little
    endian
    Target DB: IBM VM on series z9, SLES10, 4G RAM, Oracle 10.2.0.3 EE, big
    endian
    Database is just shy of 1 terabyte in size--70 percent of data is in a
    single table; total of 212 tables.
    12 very large tables, including the monster--most have no column, like a
    static date, that can be used to logically break the data into
    "partitions".


    I'm working on procedures to move our production database from the Dell to
    the IBM. My tests so far indicate that I'm going to need more than 24
    hours. Management, in their infinite wisdom, is insisting that it be done
    in less than 8. It will take as long as it takes, but I'm wondering what
    I can do to speed things up. So far I've done the following:


    exp/imp - too slow overall
    plsql that commits at regular intervals, depending on the size of the
    table - works very well for all tables under 1M; can load 111 tables in
    under 2 hours using 2 concurrent sessions. Works for larger tables, but
    obviously takes much longer. I had 2 sessions doing tables under 1M and 2
    doing tables between 1M and 100M concurrently. Didn't try for the 12
    tables over 100M.
    Direct-path insert - used on the table holding 70 percent of the data.
    Four months ago I insisted this table have a static date column added. I
    can logically break the data loads down by date--they want the most
    current data loaded first, the remainder can be done over a period of
    days. This is working reasonably well, but having done this same thing
    once before on this table, I know it will take about a month to get all
    the data moved based on the constraints I'm working under--can't be done
    during core business hours, etc.
    I put the target database in noarchivelog mode for my testing. Is
    this a wise move for migrating production during the go live?

    Manage has suggested that I leave off old data and load it later. Doesn't
    work with 95 pecent of the tables because of their structure and foreign
    key constraints. They also suggested I use both the primary and the
    standby databases to read from. No way to test this until I go
    live--constraints again--although this actually was part of my plan from
    the beginning. Will too many concurrent sessions loading data slow things
    down too much? What would I look at to determine this? 10g is new to me
    so I'm not familiar with all the features yet and may be missing something
    significant.


    Any suggestions are appreciated, other than telling management what they
    can do with their time constraint. (Already did that.)


    Thanks.


    Sandy
  • Yechiel Adar at Mar 25, 2008 at 1:20 pm
    I think you should look into materialized views.
    Another option to check is a logical standby database.

    I have not worked with either, so I do not know if they are suitable, but I
    am sure others on the list will kill these options if they do not fit your
    environment.

    Adar Yechiel
    Rechovot, Israel
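
    If materialized views do fit, the usual shape is a fast-refreshable MV over
    a database link, refreshed repeatedly until cut-over; a minimal sketch,
    assuming a db link OLD9I, a hypothetical table MYTAB with a primary key,
    and a materialized view log already created on the 9i source table:

# minimal sketch on the 10g target -- OLD9I and MYTAB are placeholders
sqlplus -s system/pass <<'EOF'
CREATE MATERIALIZED VIEW mytab
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
  AS SELECT * FROM mytab@old9i;

EXEC DBMS_MVIEW.REFRESH('MYTAB', 'F');  -- repeat until cut-over
EOF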

  • Sandra Becker at Mar 25, 2008 at 1:38 pm
    Thanks, I hadn't thought of a logical standby.
  • Andrew Kerber at Mar 25, 2008 at 2:00 pm
    It can be done with export/import. If you use export/import, use multiple
    streams and disable all constraints before the import. Enable the
    constraints when all imports are complete. That includes primary key
    constraints. It takes planning and practice to use that method, but it
    will work. The advantage is that most DBAs have a good understanding of
    exp/imp.
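
    Generating the disable statements from the dictionary keeps this manageable;
    a minimal sketch, assuming all the data lives in a single hypothetical
    schema APPOWNER (enable them again afterwards in the reverse order, primary
    keys before foreign keys):

# minimal sketch: spool ALTER statements for one placeholder schema
sqlplus -s system/pass <<'EOF'
set pagesize 0 linesize 200 feedback off heading off trimspool on
spool disable_constraints.sql
SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
       ' DISABLE CONSTRAINT ' || constraint_name || ';'
FROM   dba_constraints
WHERE  owner = 'APPOWNER'
AND    constraint_type IN ('R', 'P')           -- foreign keys first, then primary keys
ORDER  BY DECODE(constraint_type, 'R', 0, 1);
spool off
EOF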

    Datapump will also work. Datapump will run in parallel mode, but it has
    some bugs in 10g.

    If you have blobs/clobs they will be your single largest time consumer. If
    you use export/import do not commit until all the blobs/clobs are imported.
    I have done this a few times, and at one point I wrote something for a mass
    delete using an export (basically, export the rows to keep, truncate the
    table, and import). It was nice since it rebuilt the table and indexes at
    the same time it did the export/import.

  • Howard Latham at Mar 25, 2008 at 2:15 pm
    I found that large tables and exp are difficult. Also, exp/imp means you
    have to rebuild the indexes--which is painfully slow, even for a small db!
  • Andrew Kerber at Mar 25, 2008 at 2:19 pm
    I have found that large table exports are much easier than imports: set
    direct=y, do the export on the local server, then copy the export file to
    your destination server to do the import. Also, if you build the indexes
    as a separate import job it works better.

    For best performance, here is what I found worked best (the commands are
    sketched after this list). Do all of this with multiple streams:

    1. Export the tables (rows=n) to get your table definitions.
    2. Export with rows=y, direct=y to get the data.
    3. Import the table definitions (rows=n, constraints=n, indexes=n).
    4. Import the tables (rows=y, constraints=n, indexes=n).
    5. Then import again (rows=n, indexes=y, constraints=y).
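
    Spelled out for a single schema, the passes look roughly like this; a
    minimal sketch, with APPOWNER and the file names as placeholders and the
    splitting into multiple streams left out:

# minimal sketch of the passes -- schema and file names are placeholders
exp system/pass owner=APPOWNER rows=n file=ddl.dmp log=exp_ddl.log
exp system/pass owner=APPOWNER rows=y direct=y indexes=n constraints=n \
    file=data.dmp log=exp_data.log
imp system/pass fromuser=APPOWNER touser=APPOWNER rows=n constraints=n indexes=n \
    file=ddl.dmp log=imp_ddl.log
imp system/pass fromuser=APPOWNER touser=APPOWNER rows=y constraints=n indexes=n \
    ignore=y file=data.dmp log=imp_data.log
imp system/pass fromuser=APPOWNER touser=APPOWNER rows=n indexes=y constraints=y \
    ignore=y file=ddl.dmp log=imp_idx.log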

  • Hand, Michael T at Mar 25, 2008 at 3:21 pm
    Andrew,


    Just to clarify: do you export statistics in this scenario, or recreate
    them afterwards?


    Mike Hand

  • Juan Miranda at Mar 25, 2008 at 3:14 pm
    Hello. Hope this helps you.


    (The target server has less memory than the source?)



    You must determine the time to move the biggest table with export/import.
    Other tables can be imported simultaneously.


    No archivelog, of course.
    Reduce checkpoints: very, very big redo log files (5GB); check
    log_checkpoint_interval, etc.
    Lots of memory assigned to the PGA (>1GB) to recreate indexes.
    Import is slow; you must do several imports in parallel!
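
    Adding the large redo log groups on the target is one statement per group;
    a minimal sketch, with placeholder paths:

# minimal sketch: big redo log groups on the target to space out checkpoints
sqlplus -s system/pass <<'EOF'
ALTER DATABASE ADD LOGFILE GROUP 4 ('/oradata/newdb/redo04.log') SIZE 5000M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/oradata/newdb/redo05.log') SIZE 5000M;
EOF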

    Don't create indexes during import.
    Download ddlWizard and generate all the DDL. This tool will give you a
    script to generate all the indexes.
    Split it into 2 pieces and execute the 2 pieces at the same time.
    Change the parallelism for very big indexes.
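
    Building the indexes afterwards, in parallel and without redo, might look
    like this; a minimal sketch, with placeholder schema, table and column
    names:

# minimal sketch: post-load index build -- names are placeholders
sqlplus -s system/pass <<'EOF'
CREATE INDEX appowner.bigtab_ix1 ON appowner.bigtab (load_date)
  PARALLEL 4 NOLOGGING;
ALTER INDEX appowner.bigtab_ix1 NOPARALLEL;  -- reset the degree afterwards
EOF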

    Don't analyze during import. You will do it later in parallel.

    You can do the export to a pipe and the import from a pipe SIMULTANEOUSLY.
    This script does that, running 4 export/import pairs at a time:



    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    #!/bin/bash -x
    #
    # For each table listed in tablas_restores.txt: create a named pipe on the
    # local (target) node, start an import reading from it in the background,
    # and start a remote export on nodo1 whose output is streamed back over rsh
    # into the local pipe. At most 4 import streams run at once.

    for tb in `cat tablas_restores.txt`
    do

    echo "Starting import LOCALLY: " $tb
    mkfifo /oracle10/tmp/$tb.pipeimp
    imp file=/oracle10/tmp/$tb.pipeimp log=/tmp/imp_$tb.log parfile=/tmp/imp.par &

    echo "Starting export on the REMOTE node: " $tb
    rsh -l oracle10 nodo1 ". /oracle10/.bash_profile; \
      mkfifo /oracle10/tmp/$tb.pipeexp; \
      exp br/br tables=$tb file=/oracle10/tmp/$tb.pipeexp & \
      rsh -l oracle10 nodo2 'cat > /oracle10/tmp/$tb.pipeimp' \
      < /oracle10/tmp/$tb.pipeexp" &

    echo -----------------------------------------------------

    echo "Checking the number of running imports"
    ps aux |grep imp |grep -v grep > /tmp/import.txt
    imports=`wc -l /tmp/import.txt |awk '{print $1}'`
    echo "number of imports running: " $imports

    # 4 or more already running? wait.
    while [ $imports -ge 4 ];
    do
    echo "imports running:" $imports "-- 4 or more, waiting"
    sleep 3
    ps aux |grep imp |grep -v grep > /tmp/import.txt
    imports=`wc -l /tmp/import.txt |awk '{print $1}'`
    echo $imports
    done

    done
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    exp.par
    USERID=system/pass
    direct=y
    buffer=50000000
    recordlength=65535
    indexes=n
    constraints=n
    statistics=none
    grants=n

    imp.par
    USERID=system/pass
    statistics=none
    fromuser=USER1
    touser=USER1
    buffer=50000000
    constraints=n
    grants=n
    indexes=n
    ignore=Y

    Greetings.
