Imagine you have the following storage subsystem constructed
(ignore caching effects for this little thought experiment):
volA: 3-disk RAID 5 volume mounted over single-pathed 1 Gbps Fibre, 32 KB stripe size
volB: 8-disk RAID 10 volume mounted over dual-pathed 2 Gbps Fibre, 256 KB stripe size
Now imagine a locally managed tablespace (LMT) with a uniform extent
size of, say, 1 MB, built from 2 datafiles, with 2 GB of data loaded
into it: 1 GB in a datafile on volA, 1 GB in a datafile on volB.
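As a back-of-envelope sketch of what those two paths can move (my numbers, not from the thread: I'm assuming roughly 100 MB/s of payload per raw Gbps of Fibre Channel after 8b/10b encoding overhead, and perfect load balancing across dual paths):

```python
# Rough usable bandwidth for the two volumes in the thought experiment.
# Assumption: ~100 MB/s payload per raw Gbps of FC (8b/10b encoding).
MB_PER_S_PER_GBPS = 100

def usable_mb_per_s(raw_gbps, paths):
    """Approximate aggregate payload bandwidth across all paths."""
    return raw_gbps * MB_PER_S_PER_GBPS * paths

vol_a = usable_mb_per_s(1, 1)  # single-pathed 1 Gbps -> ~100 MB/s
vol_b = usable_mb_per_s(2, 2)  # dual-pathed 2 Gbps   -> ~400 MB/s

print(f"volA: ~{vol_a} MB/s, volB: ~{vol_b} MB/s")
```

So even before RAID geometry enters the picture, volB has roughly 4x the plumbing.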
When you view the IO waits by tablespace, they will appear mediocre:
the superior performance of the IO to volB is averaged together with
the incredibly awful performance of volA.
When you view the IO waits by datafile, you can see the difference in
the IO characteristics (on average) per datafile. IO waits would likely
be under 2 ms for the RAID 10 volume and around 20 ms for the RAID 5
volume. Throughput would be proportionally similar.
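A quick sketch of why the tablespace-level number looks mediocre (the latencies and read counts below are just the illustrative figures from this thought experiment, not measured data):

```python
# Per-datafile read waits, weighted-averaged the way a per-tablespace
# view effectively averages them. Numbers are illustrative only.
reads = {
    "volA_datafile": {"avg_wait_ms": 20.0, "read_count": 10_000},  # RAID 5
    "volB_datafile": {"avg_wait_ms": 2.0,  "read_count": 10_000},  # RAID 10
}

total_reads = sum(f["read_count"] for f in reads.values())
tablespace_avg = sum(
    f["avg_wait_ms"] * f["read_count"] for f in reads.values()
) / total_reads

for name, f in reads.items():
    print(f"{name}: {f['avg_wait_ms']} ms")
print(f"tablespace-level average: {tablespace_avg:.1f} ms")  # 11.0 ms
```

The 11 ms tablespace figure looks merely mediocre, hiding a 2 ms volume and a 20 ms volume underneath it.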
You then recite some expletive about RAID 5, move all the datafiles to
volB, and swear off ever discussing RAID 5 again.
This little comment will probably get me tossed out of BAARF. So be it.
On Wed, 23 Feb 2005 16:46:18 -0800 (PST), Sai Selvaganesan wrote:
There are two sections in statspack for IO-related information:

under Load Profile
- physical reads and physical writes per second

under Tablespace IO / File IO
- avg reads/sec and avg writes/sec

What is the difference between the above, and which one is related to IOPS?
# f=ma, divide by 1, convert to moles.