Hello all,

I have inherited several Oracle 9.2.0.8 data warehouses on HP-UX 11.11 servers, each 2+ TBs. All of them are experiencing various performance issues.

The db_file_multiblock_read_count (MBRC) on these databases is explicitly set to 8. To try to get a better idea of what we might increase the MBRC to, I wanted to figure out the values for the formula:

[stripe width] x [stripe size] / [db_block_size]

How can I determine the values for stripe width and stripe size on HP-UX 11.11?

I thought perhaps the lvdisplay command might have the info. Looking at typical output, I see that there is a Stripe Size field. Would LV Size correspond to the stripe width?

$ lvdisplay /dev/vgdb/lvol1
--- Logical volumes ---

LV Name /dev/vgdb/lvol1
VG Name /dev/vgdb
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule striped
LV Size (Mbytes) 1198080
Current LE 4680
Allocated PE 4680
Stripes 6
Stripe Size (Kbytes) 1024
Bad block NONE
Allocation strict
IO Timeout (Seconds) default
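
If Stripes here is in fact the stripe width, then plugging these numbers into the formula above, and assuming an 8 KB db_block_size purely for illustration (I would substitute our actual block size), this works out to:

6 x 1024 KB / 8 KB = 768 blocks per multiblock read

which seems suspiciously large, so I may well be misreading these fields.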

I also looked at the vgdisplay command, part of whose output I have included below. Is there something here that is the equivalent of stripe size and stripe width?

--- Volume groups ---
VG Name /dev/vgdb
VG Write Access read/write
VG Status available, exclusive
Max LV 16
Cur LV 5
Open LV 5
Max PV 255
Cur PV 36
Act PV 36
Max PE per PV 65535
VGDA 72

PE Size (Mbytes) 256
Total PE 14364
Alloc PE 14364
Free PE 0
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0

--- Logical volumes ---
LV Name /dev/vgdb/lvol1
LV Status available/syncd
LV Size (Mbytes) 1198080
Current LE 4680
Allocated PE 4680
Used PV 12
....
....

I realize that perhaps the best way to determine the optimal MBRC value with these file systems would be to use the dd command and write a large datafile with different block size options to the filesystems. These databases, though, have been up and running for quite a while, and it's not possible to quiesce the systems and run benchmarks.
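
For the record, the kind of test I had in mind would have been something along these lines, timing a large sequential write at a few different block sizes (the path and sizes are purely illustrative):

$ timex dd if=/dev/zero of=/u01/oradata/ddtest bs=64k count=16384
$ timex dd if=/dev/zero of=/u01/oradata/ddtest bs=1024k count=1024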

If anyone could confirm what I should be looking at with lvdisplay or vgdisplay, or could suggest another command, it would be greatly appreciated.

Thanks,
Lou Avrami


  • Greg Rahn at Sep 4, 2008 at 10:11 pm

    On Thu, Sep 4, 2008 at 1:49 PM, Lou Avrami wrote:
    I have inherited several Oracle 9.2.0.8 data warehouses on HP-UX 11.11 servers, each 2+ TBs. All of them are experiencing various performance issues.
    What metrics do you have that point to I/O being a problem?
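    One quick sanity check (just a sketch) is to look at the top wait
    events since instance startup and see whether multiblock read waits
    actually dominate (ignoring the idle events):

    select event, total_waits, time_waited
    from v$system_event
    order by time_waited desc;

    If 'db file scattered read' is nowhere near the top of that list,
    bigger multiblock reads are unlikely to buy you much.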
    The db_file_multiblock_read_count (MBRC) on these databases is explicitly set to 8.
    This might be a bit small for a warehouse, but it completely depends
    on the query workload. Do the majority of the queries use index
    access or table scans? Adjusting the MBRC will affect the costing of
    *all* plans, but it will only benefit the I/O sizes for those doing
    partition/index fast full/table scans. Be careful what you optimize
    for...if your storage and I/O channels cannot handle more I/O from
    more FTS then there will likely be little benefit in adjusting it
    larger. In fact, if your I/O is bottlenecked, it may get worse.
    To try to get a better idea of what we might increase the MBRC to, I wanted to figure out the values for the formula:
    [stripe width] x [stripe size] / [db_block_size]
    This looks like a great equation, but I don't think it is what you
    want. Generally it is better to have the largest database I/O match
    the stripe depth/size (how much of the stripe is on a single physical
    disk) so that a single large (say 1MB) I/O is serviced by exactly one
    physical disk.

    I would recommend reviewing the Oracle docs:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/iodesign.htm#i28412

    The other thing to consider is all the different layers that
    change/chop I/O size. This includes kernel, I/O driver, storage
    array, raid controller, etc. The I/O size that you think you are
    sending may not be what the physical disk actually gets. This is only
    visible with storage array tools.
    I realize that perhaps the best way to determine the optimal MBRC value with these file systems would be to use the dd command and write a large datafile with different block size options to the filesystems.
    Actually the best way to determine it is to run some FTS and alter
    MBRC at the session level, comparing the elapsed times. Just make sure
    that you are doing 100% physical I/O (no blocks from buffer cache or
    filesystem cache). Then you will see if it is worth your time. My
    guess is probably not.
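
    For example, something like this from SQL*Plus (the table name is a
    placeholder and the two MBRC values are arbitrary; the point is just
    to compare elapsed times for the same scan):

    SQL> set timing on
    SQL> set autotrace traceonly statistics
    SQL> alter session set db_file_multiblock_read_count = 8;
    SQL> select /*+ full(t) */ count(*) from big_fact_table t;
    SQL> alter session set db_file_multiblock_read_count = 64;
    SQL> select /*+ full(t) */ count(*) from big_fact_table t;

    The autotrace statistics show the physical reads for each run, so you
    can verify the scan is doing physical I/O rather than hitting the
    buffer cache.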

    I would recommend you spend your time looking at execution plans and
    determining if you have optimal partitioning. The fastest I/O is the
    one that never takes place (e.g. partition elimination). These will
    likely give you orders of magnitude gains vs. a few percentage points
    of gains tweaking MBRC. Go after the big fish!
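
    As a trivial illustration (the table and column are made up), a
    predicate on the partition key is what lets the optimizer prune:

    select sum(amount)
    from sales_fact
    where sale_date >= to_date('01-AUG-2008', 'DD-MON-YYYY')
    and sale_date < to_date('01-SEP-2008', 'DD-MON-YYYY');

    If sales_fact is range partitioned by sale_date, only the partitions
    covering that month get scanned, and the Pstart/Pstop columns in the
    plan (explain plan plus dbms_xplan.display in 9.2) will show it.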
