I wanted to find out what the average multiblock I/O size is. Based on:
bytes = "physical read bytes"
total data blocks read = bytes / database block size
single block reads = "physical read total IO requests" - "physical read total multi block requests"
blocks read via multi-block reads = total data blocks read - single block reads
It seems like

average multi block read size = blocks read via multi-block reads / multi block read requests

i.e.

(("physical read bytes" / db block size) - ("physical read total IO requests" - "physical read total multi block requests")) / "physical read total multi block requests"
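As a sanity check on the arithmetic, here's a minimal Python sketch of that calculation. The statistic values below are made up purely for illustration, not taken from a real instance:

```python
# Hypothetical v$sysstat-style values (made up for illustration)
physical_read_bytes = 8_396_800_000   # "physical read bytes"
physical_read_io_requests = 600_000   # "physical read total IO requests"
multi_block_requests = 50_000         # "physical read total multi block requests"
db_block_size = 8192                  # database block size in bytes

# total data blocks read = bytes / block size
total_blocks_read = physical_read_bytes // db_block_size

# single block reads = total requests - multiblock requests
single_block_reads = physical_read_io_requests - multi_block_requests

# blocks read via multiblock reads = total blocks - single block reads
multi_block_blocks = total_blocks_read - single_block_reads

# average multiblock read size, in blocks per multiblock request
avg_multi_block_read_size = multi_block_blocks / multi_block_requests
print(avg_multi_block_read_size)  # 9.5 blocks per multiblock read
```

With these made-up numbers the average multiblock read comes out to 9.5 blocks, i.e. about 78 KB per multiblock request at an 8 KB block size.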
but I'm wondering if this is right, and it seems a little convoluted.
My end goal is to match up I/O latencies off disk with I/O wait events.
Sometimes "db file sequential read" is slow, sometimes it's "db file
scattered read", and sometimes it's "direct path read". If each of these has
different I/O sizes then with DTrace I can match up the I/O sizes and
latencies to the wait event. I'm thinking it's better to just read the I/O
sizes out of Active Session History per I/O wait type. These I/O sizes will
be skewed toward the larger sizes, but they should at least show the max size
and give some idea of the size variation.
PS: does anyone know what kinds of I/O are included in "physical read total
bytes"? It seems to include data block reads, control file reads and redo reads.
Kyle
http://dboptimizer.com