Insert contention on RAC

To borrow the style from some MetaLink notes--

Fact: Oracle 10.2.0.1.0
Fact: RedHat 3
Fact: 3 node RAC

Many concurrent sessions inserting one row at a time into the same
table. On first run, table in ASSM tablespace, about 50% of database
service time was gc buffer busy waits. Built a new tablespace with
manual segment management, re-created table with three freelist groups
and four freelists. Next run showed >90% of service time was enq: HW
contention; total run time was slightly higher than it was with ASSM.
Re-created table again, allocated three 4 GB extents, one for each
instance. Current run is still showing high HW contention waits.

Might I have too many freelists? Other than forcing all sessions to use
the same RAC instance, is there anything else I can do to reduce these
waits?

Thanks,

Paul Baumgartel
CREDIT SUISSE

Information Technology
Prime Services Databases Americas
One Madison Avenue
New York, NY 10010
USA

Phone 212.538.1143
paul.baumgartel_at_credit-suisse.com
www.credit-suisse.com

  • Tim Gorman at Jun 25, 2008 at 8:03 pm
    Paul,

    Nice diagnosis on the "gc buffer busy" waits. Performance tuning,
    especially on RAC, is like peeling an onion. You've peeled away the
    first layer; now the next is "enq: HW contention"...

    Try this...

    ALTER TABLE ... ALLOCATE EXTENT (INSTANCE 1);
    ALTER TABLE ... ALLOCATE EXTENT (INSTANCE 2);
    ALTER TABLE ... ALLOCATE EXTENT (INSTANCE 3);
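
    (For illustration only, with a hypothetical table name insert_target;
    the SIZE clause is optional and just makes the statements concrete:)

    ALTER TABLE insert_target ALLOCATE EXTENT (SIZE 4G INSTANCE 1);
    ALTER TABLE insert_target ALLOCATE EXTENT (SIZE 4G INSTANCE 2);
    ALTER TABLE insert_target ALLOCATE EXTENT (SIZE 4G INSTANCE 3);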

    IIRC, using freelist groups implies that the entire extent is assigned
    to one of the freelist groups upon allocation. There used to be a
    parameter called GC_FILES_TO_LOCKS that would do things more
    granularly, but I'm sure it's deprecated by now. Anyway, automatic
    allocation of extents makes this assignment to freelist groups and
    instances a matter of rolling dice, so it might become necessary to
    pre-allocate extents for each instance.

    Try that and see if the high-water mark contention enqueue waits go down.

    Of course, there will be more layers to the onion under this. I'm
    betting that, since it sounds like you're inserting (and committing)
    one row at a time, it'll eventually come down to "log file sync"
    proving the ultimate limitation on performance, as the single-row
    commits drive each instance's LGWR process nuts. Just my $0.02...

    Hope this helps!

    -Tim

    Quoting "Baumgartel, Paul" :
    To borrow the style from some MetaLink notes--

    Fact: Oracle 10.2.0.1.0
    Fact: RedHat 3
    Fact: 3 node RAC

    Many concurrent sessions inserting one row at a time into the same
    table. On first run, table in ASSM tablespace, about 50% of database
    service time was gc buffer busy waits. Built a new tablespace with
    manual segment management, re-created table with three freelist groups
    and 4 freelists. Next run showed >90% of service time was enq: HW
    contention; total run time was slightly higher than it was with ASSM..
    Re-created table again, allocated three 4 GB extents, one for each
    instance. Current run is still showing high HW contention waits.

    Might I have too many freelists? Other than forcing all sessions to use
    the same RAC instance, is there anything else I can do to reduce these
    waits?

    Thanks,

    Paul Baumgartel
    CREDIT SUISSE
    Information Technology
    Prime Services Databases Americas
    One Madison Avenue
    New York, NY 10010
    USA
    Phone 212.538.1143
    paul.baumgartel_at_credit-suisse.com
    www.credit-suisse.com



    ==============================================================================
    Please access the attached hyperlink for an important electronic
    communications disclaimer:

    http://www.credit-suisse.com/legal/en/disclaimer_email_ib.html
    ==============================================================================
    --
    http://www.freelists.org/webpage/oracle-l
  • Baumgartel, Paul at Jun 25, 2008 at 8:15 pm
    Tim,

    I'm way ahead of you! ;-) Take another look at my post; I did
    pre-allocate one large extent per instance. Unfortunately, it didn't
    help.

    I'm pretty sure that commits are batched; they're sending a batch of
    insert statements from Java and probably committing after each batch.
    Transaction rates are only about 1.5 per second.

    If I get down to log file sync, I'll be happy. The block contention is
    what's driving me nuts. And it occurs to me that if I restrict the
    inserts to one instance, I'll just get buffer busy waits instead of gc
    buffer busy waits.

    Help!

    Thanks for your reply.

  • K Gopalakrishnan at Jun 25, 2008 at 8:50 pm
    Paul,
    If I get down to log file sync, I'll be happy. The block contention is
    what's driving me nuts. And it occurs to me that if I restrict the
    inserts to one instance, I'll just get buffer busy waits instead of gc
    buffer busy waits.
    This is quite interesting. What you see is a simple block-contention
    effect amplified by RAC.

    Now to get deeper into this one...

    Is that table partitioned? Are the waits for segment headers or for
    data blocks?
    Is the table indexed? What is the contention distribution between
    tables and indexes? You can query V$SEGMENT_STATISTICS.
    Do you have any ITL waits / service ITL waits for that table (also in
    V$SEGMENT_STATISTICS)?
    Are there any sequential values involved in this business transaction?
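
    For example, something along these lines (the object name is a
    placeholder; statistic names can vary slightly by version) shows where
    the contention is concentrated:

    SELECT object_name, object_type, statistic_name, value
    FROM   v$segment_statistics
    WHERE  owner = USER
    AND    object_name = 'INSERT_TARGET'   -- hypothetical table name
    AND    statistic_name IN ('buffer busy waits', 'gc buffer busy',
                              'ITL waits')
    ORDER  BY value DESC;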

    --
    Best Regards,
    K Gopalakrishnan
    Co-Author: Oracle Wait Interface, Oracle Press 2004
    http://www.amazon.com/exec/obidos/tg/detail/-/007222729X/

    Author: Oracle Database 10g RAC Handbook, Oracle Press 2006
    http://www.amazon.com/gp/product/007146509X/
  • Tim Gorman at Jun 25, 2008 at 9:06 pm
    Ouch! Sorry, I didn't see that; thanks for pointing it out...

    In that case, I'm wondering if you have an index on this table
    populated with a monotonically ascending data value (e.g., SYSDATE or
    a sequence NEXTVAL). If so, freelists certainly wouldn't help you...

    Quoting "Baumgartel, Paul" :
    Tim,

    I'm way ahead of you!;-) Take another look at my post; I did
    pre-allocate one large extent per instance. Unfortunately, it didn't
    help.

    I'm pretty sure that commits are batched; they're sending a batch of
    insert statements from Java and probably committing after each batch.
    Transaction rates are only about 1.5 per second.

    If I get down to log file sync, I'll be happy. The block contention is
    what's driving me nuts. And it occurs to me that if I restrict the
    inserts to one instance, I'll just get buffer busy waits instead of gc
    buffer busy waits.

    Help!

    Thanks for your reply.
    --
    http://www.freelists.org/webpage/oracle-l
  • Baumgartel, Paul at Jun 25, 2008 at 9:15 pm
    Tim,

    No indexes at all; they're built after data population.

    I also didn't mention that each run starts with a newly-created table,
    thus the heavy HWM contention.

    Roman Podshivalov had a promising idea: insert junk data beforehand to
    raise the HWM, then delete it and run the real thing. There are several
    million rows, so a DELETE might be slow enough to erase any time savings
    in the run itself. I thought of TRUNCATE...REUSE STORAGE, but the docs
    aren't clear about HWM; they say HWM is reset if there's more than one
    freelist, but don't mention what happens if there's only one freelist.
    Guess I'll just have to keep testing.
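
    (For illustration only, a minimal sketch of that pre-inflation idea,
    assuming a hypothetical single-column table insert_target; it uses
    DELETE rather than TRUNCATE precisely because of the HWM question
    above:)

    -- Raise the HWM with throwaway rows before the timed run...
    INSERT INTO insert_target
      SELECT level FROM dual CONNECT BY level <= 1000000;
    COMMIT;

    -- ...then remove them with DELETE, which leaves the HWM in place.
    DELETE FROM insert_target;
    COMMIT;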

    Thanks to everyone for their helpful suggestions.

  • Rajeev Prabhakar at Jun 25, 2008 at 9:28 pm
    Paul,

    What is the extent size for the table that is experiencing these waits?
    Perhaps a small extent size is the underlying reason?
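
    (Something along these lines would show it; the segment name is a
    placeholder:)

    SELECT COUNT(*) AS extents, MIN(bytes) AS smallest, MAX(bytes) AS largest
    FROM   dba_extents
    WHERE  owner = USER
    AND    segment_name = 'INSERT_TARGET';   -- hypothetical table name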

    Rajeev

  • Mark W. Farnham at Jun 25, 2008 at 9:31 pm
    You can decouple the inflation from the timed run by inflating a table
    of a different name and then using rename. You can set PCTFREE and
    PCTUSED so that the table inflates with one row per block, alter
    PCTFREE and PCTUSED back, and then run the delete.

    If the target table has its own tablespace, you can drop the tablespace
    and re-transport the tablespace containing the pre-inflated empty
    table.
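
    (A rough sketch of that idea, with hypothetical names and a one-column
    stand-in so it is self-contained; the real table's columns and storage
    clauses would go here:)

    CREATE TABLE new_target (n NUMBER)
      PCTFREE 99 PCTUSED 1
      STORAGE (FREELIST GROUPS 3 FREELISTS 4);

    -- Inflate: at PCTFREE 99 each small row lands in its own block,
    -- pushing the HWM up one block per row.
    INSERT INTO new_target SELECT level FROM dual CONNECT BY level <= 1000000;
    COMMIT;

    -- Put the block-usage settings back, purge the junk, and swap names.
    ALTER TABLE new_target PCTFREE 10 PCTUSED 40;
    DELETE FROM new_target;
    COMMIT;

    RENAME insert_target TO insert_target_old;
    RENAME new_target TO insert_target;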

  • Tim Gorman at Jun 25, 2008 at 9:44 pm
    Dang Mark! We're going to have to rename you "MacGyver".....;-)

    Quoting "Mark W. Farnham" :
    You can decouple from the time of the run by inflating a table of a
    different name and using rename. You can mess with pctfree and pctused so
    you can inflate the table with one row per block, alter pctfree and pctused,
    and then run the delete.

    If the target table has its own tablespace you can drop the tablespace and
    re-transport in the tablespace with the pre-inflated empty table.
    --
    http://www.freelists.org/webpage/oracle-l
  • Bobak, Mark at Jun 25, 2008 at 9:59 pm
    Another option would be:

    Insert one row into the table.
    ALTER TABLE ... MINIMIZE RECORDS_PER_BLOCK;
    Insert as many rows as you'll need (you'll get one block per row inserted).
    Delete the rows.
    ALTER TABLE ... NOMINIMIZE RECORDS_PER_BLOCK;

    That way, you don't have to mess with PCTUSED/PCTFREE calculations.
    You'll get exactly one row per block. It's just a little quicker/cleaner
    way to extend the table to the size you need.
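
    (A sketch of those steps with a hypothetical single-column table
    insert_target; the real table and row count would differ:)

    INSERT INTO insert_target VALUES (0);
    COMMIT;
    ALTER TABLE insert_target MINIMIZE RECORDS_PER_BLOCK;

    -- Each subsequent row now gets its own block, raising the HWM.
    INSERT INTO insert_target
      SELECT level FROM dual CONNECT BY level <= 1000000;
    COMMIT;

    DELETE FROM insert_target;
    COMMIT;
    ALTER TABLE insert_target NOMINIMIZE RECORDS_PER_BLOCK;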

    -Mark

    --
    Mark J. Bobak
    Senior Database Administrator, System & Product Technologies
    ProQuest
    789 E. Eisenhower Parkway, P.O. Box 1346
    Ann Arbor MI 48106-1346
    +1.734.997.4059 or +1.800.521.0600 x 4059
    mark.bobak_at_proquest.com
    www.proquest.com
    www.csa.com

    ProQuest...Start here.

  • Mark W. Farnham at Jun 25, 2008 at 10:45 pm
    If you can figure out a way to choose the table name to insert into
    based on which instance a particular session has happened to connect to
    (and possibly more conditions, like a modulo on session number if you
    need to fan out wider than the number of instances you have), then you
    can have a table for each instance, avoiding all the gc stuff, and then
    do a Gorman with the tables after the load is complete.

    (A Gorman is where you totally avoid contention with an existing big
    partitioned table being affected by dataloads by doing a partition exchange
    after the data is loaded. So this is a modified Gorman, in that you'll
    be exchanging the n tables into something that was formerly empty.)

    (If Tim doesn't like me calling that a Gorman, I'll withdraw it.)

    Finally, since you've added the information that you start empty each time
    and create the indexes after the load, I *suspect* that you simply do not
    use that data for anything until the load is complete. If that is true you
    might be better off having each insert session create a table named
    from a sequence generator for its inserts, record the table name in a
    job-control list, and then build a little utility to automate doing the
    Gorman when all the sessions are complete (or copy/append them all to
    the destination if you don't have partitioning). If this is followed up
    with postprocessing jobs,
    knowing the affinity between partitions and instances *might* facilitate
    logically parallel processing (as opposed to parallel degree) on each
    partition separately with essentially no gc traffic.
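
    (A very rough sketch of the exchange step, with hypothetical names and
    assuming the destination is list- or range-partitioned so that each
    staging table's rows belong to exactly one partition:)

    -- Each instance's sessions load their own staging table (stage_1,
    -- stage_2, stage_3), so there is no cross-instance gc traffic during
    -- the load. Afterwards, swap each staging table into its partition:
    ALTER TABLE insert_target EXCHANGE PARTITION p1
      WITH TABLE stage_1 WITHOUT VALIDATION;
    ALTER TABLE insert_target EXCHANGE PARTITION p2
      WITH TABLE stage_2 WITHOUT VALIDATION;
    ALTER TABLE insert_target EXCHANGE PARTITION p3
      WITH TABLE stage_3 WITHOUT VALIDATION;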

    mwf

  • Mark W. Farnham at Jun 25, 2008 at 9:25 pm
    You might need more freelists. Other than pre-reserving a few extra blocks
    which might sometimes drive new extents sooner than otherwise required,
    extra freelists won't hurt you. You quoted "many" concurrent sessions
    inserting. Let's say that "many" is as low as 48 and for easy math let's say
    they are 16 per node. Then on average 4 inserters are competing for space on
    each free list on each node. When you bump the end of that freelist, one
    session is going to be first to grab more space for the freelist and the
    other three have a chance to collide. I'm trying to remember if they get
    interrupted in the attempt to add blocks to that freelist or if they check
    again after the enqueue to see if more space is already there in the
    freelist the session has in hand. (Things are usually moving too fast to
    observe if the freelist gets bumped more than it has to get bumped.)



    The maximum freelists value depends on your blocksize, and if you have a lot
    of concurrent inserters you probably want as big a value as is allowed.



    Pre-allocating extents for each instance is a good idea. I trust you got the
    syntax right, or else we'd probably still be seeing some gc wait as parts of
    each extent are doled out dynamically from the master freelist instead of being
    assigned to a particular instance's freelist group.



    Now there is also the issue of indexes and partitioning, and whether, rather
    than forcing all sessions to one instance, you could establish affinity to
    particular instances based on some knowledge with respect to a session that
    predicts which partition the session will insert into. For example, if you
    have a notion of timezones and use 1 plus the remainder of the Greenwich
    offset hour divided by 3 to pick an instance, then you would phase your
    activity nicely as you follow the clock (likely peak times in many
    applications, though of course not all). Of course the east coast and the
    west coast fall to the same partition by this scheme so you might want to
    use something else if most of your action is on the coasts. If all your
    sessions originate in the same timezone this is obviously worthless, but I
    hope it gives you an idea of the sort of thing that MAY exist as both a
    useful instance connection router and partition insert key. When this can be
    done the system operates more like 3 separate databases as regards insert
    performance, and that is a good thing. If you do this, then in your extent
    allocation you'll want to skew the allocation of extents heavily to the
    instance likely to get the inserts for each partition.
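
    (Purely illustrative arithmetic for that routing idea; here the
    server's offset from SYSTIMESTAMP stands in for whatever per-session
    timezone attribute is actually available:)

    -- 1 + (Greenwich offset hour modulo 3) yields a value in 1..3, usable
    -- as both an instance-routing key and a partition key. The +24 keeps
    -- the dividend non-negative for western offsets.
    SELECT 1 + MOD(EXTRACT(TIMEZONE_HOUR FROM SYSTIMESTAMP) + 24, 3)
             AS preferred_instance
    FROM   dual;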



    Now let's say there is a way to make, say, 10 partitions per instance with
    some useful session key. Then you'd have 30 times more chances to avoid HW
    contention than you currently do.



    Even if you can't establish instance affinity, you could use balanced
    allocation of freelist groups for each partition if there is some
    reasonable way to partition your data. The downside tradeoff is if that
    frequently means multi-partition queries in an awkward way that runs
    slower later. Often
    it actually can be faster, but that is a texture of use question, balancing
    the potentially enormous wins on partition pruning with the overhead of the
    need to assemble the results from many partitions.



    Finally, unless your application functional requirements dictate the one row
    at a time thing, I'd pretty much stand on my head to batch them up.



    Good luck,



    mwf







  • Tim Gorman at Jun 25, 2008 at 11:09 pm

    Quoting "Mark W. Farnham" :

    (A Gorman is where you totally avoid contention with an existing big
    partitioned table being affected by dataloads by doing a partition exchange
    after the data is loaded. So this is a modified Gorman, in that you'll be
    exchanging in the n tables to something that was formerly empty.)

    (If Tim doesn't like me calling that a Gorman, I'll withdraw it.)
    ...well, it's better than the previous definition of a "Gorman", which
    involves a bone-headed move resulting in serious injury that makes
    people cringe on the replay...

    ...and I don't even want to think about the previous definition of a
    "modified Gorman", except that it makes dogs start howling...
  • Anjo Kolk at Jun 26, 2008 at 2:46 pm
    It used to be that the number of freelists * 5 = the number of blocks
    allocated when bumping the HWM. However, in OPS it would be capped at
    the number of blocks that were covered by BL locks (assigned with gc_*
    parameters). But the fact that you see a bunch of HW enqueue contention
    tells me that the HWM is not being bumped by 5 * the freelists count.
    In fact, now with dynamic BL locking there should be a 1-to-1 relation
    between lock and block. Dump some redo and you can see the bumping of
    the HWM and the number of blocks involved. I assume it is 1 or 5 and
    not the high number that you want.
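
    (A sketch of one way to do that dump; the file path is illustrative and
    the output lands in a trace file under user_dump_dest:)

    SELECT member FROM v$logfile;   -- pick a current or recently switched log
    ALTER SYSTEM DUMP LOGFILE '/u01/oradata/rac1/redo01.log';
    -- In the trace file, the segment-header change vectors show how far
    -- each HWM bump moved.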

    Anjo.
    --
    Anjo Kolk
    Owner and Founder OraPerf Projects
    tel: +31-577-712000
    mob: +31-6-55340888

