Attached is an updated version of the 'hint bit cache'.
*) 'bucket' was changed to 'page' everywhere
*) rollup array now gets added during 'set', not 'get' (pretty
dumb the way it was before -- it wasn't really dealing with non-commit
cases)
*) more source comments, including a description of the cache in the intro
*) now caching 'invalid' bits.
I went back and forth several times whether to store invalid bits in
the same cache, a separate cache, or not at all. I finally settled
upon storing them in the same cache which has some pros and cons. It
makes it more or less exactly like the clog cache (so I could
copy/paste some code out from there), but adds some overhead because
2-bit addressing is more expensive than 1-bit addressing -- this is
showing up in profiling... I'm upping the estimate of CPU-bound scan
overhead from 1% to 2%. Still fairly cheap, but I'm running into the
edge of where I can claim the cache is 'free' for most workloads --
any claim is worthless without real world testing though. Of course,
if tuple hint bits are set or PD_ALL_VISIBLE is set, you don't have to
pay that price.
*) Haven't touched any satisfies routine besides
HeapTupleSatisfiesMVCC (should they be?)
*) Haven't pushed the cache data into CacheMemoryContext. I figure
this is the way to go, but it requires an extra 'if' on every cache 'get'.
*) Didn't abstract the clog bit addressing macros. I'm leaning toward not
doing this, but maybe they should be abstracted. My reasoning is that there is
no requirement that hint bit cache pages be whatever the block
size is, and I'd like to reserve the ability to adjust the cache page
size independently.
I'd like to know if this is a strategy that merits further work... if
anybody has time/interest, that is. It's getting close to the point
where I can just post it to the commit fest for review. In
particular, I'm concerned whether Tom's earlier objections can be
satisfied. If not, it's back to the drawing board...