During btvacuumscan(), we lock the index for extension and then wait
to acquire a cleanup lock on the last page. We then loop until we find a
point where the index has not expanded again during our wait for the
lock on that last page. On a busy index this can take some time,
especially when people regularly access data with the highest values in
the index.

The comments there say "It is critical that we visit all leaf pages,
including ones added after we start the scan, else we might fail to
delete some deletable tuples."

What seems strange is that we make no attempt to check whether we have
already found all of the tuples this VACUUM needs to remove. We know the
number of dead tuples we are looking for, and we track the number of
tuples we have deleted from the index, so we could easily make this
check early and avoid the wait.

Can we avoid scanning all pages once we have proven we have all dead tuples?

Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Discussion Overview
group: pgsql-hackers
posted: Aug 3, '11 at 8:44p
active: Aug 3, '11 at 9:57p
2 users in discussion: Simon Riggs (1 post), Tom Lane (1 post)


