"DBCC SHOWCONTIG determines whether the table is heavily fragmented.
Table fragmentation occurs through the process of data
modifications
(INSERT, UPDATE, and DELETE statements) made against the table.
Because these modifications are not ordinarily distributed equally
among the rows of the table, the fullness of each page can vary over
time. For queries that scan part or all of a table, such table
fragmentation can cause additional page reads. This hinders
parallel scanning of data."

But that's measuring something else, I think. It's not looking at how
the pages are physically mapped on disk, but at how tuples are spread
across pages. Maybe in SQL Server tuples can span pages?
I don't believe they can (except for IMAGE and TEXT data, which is
handled like our TOAST data).

That said, it returns two numbers:
Scan Density, which shows how many more pages it has to hit than would
be ideal,
and
Logical Scan Fragmentation, which shows the percentage of "out-of-order
pages" it hits. And isn't "out-of-order pages" exactly what file system
fragmentation would leave us? The difference between the physical page
location and the logical page location.
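
For reference, this is roughly how those numbers get pulled out of SQL
Server; the table name 'orders' is just a placeholder:

    -- SQL Server: report fragmentation statistics for one table
    DBCC SHOWCONTIG ('orders');
    -- In the output, Scan Density near 100% and Logical Scan
    -- Fragmentation near 0% mean the pages are laid out in logical order.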

Given that they preallocate files, they only have this kind of
fragmentation at one level. Since we don't, we can have it both inside
the file and in the filesystem. But it's still the same thing, isn't it?
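
The closest quick check I can think of on our side is
pg_stats.correlation, which compares the physical row order with the
logical order of a column's values. It's only a rough proxy for
page-level fragmentation inside the relation file, and the table and
column names here are just placeholders:

    -- PostgreSQL: correlation of 1.0 means the heap is stored in the
    -- column's sort order; values near 0 mean the rows (and so the
    -- pages a range scan touches) are effectively shuffled.
    SELECT tablename, attname, correlation
    FROM pg_stats
    WHERE tablename = 'orders' AND attname = 'order_date';

That says nothing about filesystem-level fragmentation of the file
itself, of course.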


//Magnus
