On 22 June 2013 21:40, Stephen Frost wrote:
> I'm actually not a huge fan of this as it's certainly not cheap to do. If it
> can be shown to be better than an improved heuristic then perhaps it would
> work, but I'm not convinced.
It seems we need two heuristics:
* an initial heuristic to overestimate the number of buckets when we
have sufficient memory to do so (a rough sketch follows this list)
* a heuristic to determine whether it is cheaper to rebuild a dense
hash table into a better one.
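
To make the first heuristic concrete, here is a rough standalone sketch,
not a patch against the PostgreSQL sources: the 2x padding factor, the
25%-of-work_mem cap and all the names are illustrative assumptions, only
loosely modelled on NTUP_PER_BUCKET and the existing sizing logic.

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed target tuples per bucket, in the spirit of NTUP_PER_BUCKET. */
    #define TARGET_TUPLES_PER_BUCKET 10

    /*
     * Pick an initial bucket count, deliberately overestimating when
     * work_mem leaves room to do so.  est_rows is the planner's row
     * estimate for the inner relation; bucket_ptr_size is the per-bucket
     * overhead in bytes.
     */
    static uint32_t
    choose_initial_nbuckets(double est_rows, size_t work_mem_bytes,
                            size_t bucket_ptr_size)
    {
        /* Baseline: enough buckets to hit the target load factor. */
        double      nbuckets = est_rows / TARGET_TUPLES_PER_BUCKET;

        /*
         * If the padded bucket array still fits comfortably (here: within a
         * quarter of work_mem, an arbitrary cap), double the estimate so a
         * moderate row-count underestimate doesn't overload the buckets.
         */
        double      padded = nbuckets * 2.0;

        if (padded * (double) bucket_ptr_size <= (double) work_mem_bytes / 4.0)
            nbuckets = padded;

        /* Round up to a power of two, as hash tables typically require. */
        uint32_t    result = 1;

        while ((double) result < nbuckets && result < (UINT32_MAX >> 1))
            result <<= 1;
        return result;
    }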
Although I like Heikki's rebuild approach, we can't do this on every x2
overstretch. Given that large underestimates exist, we'll end up rehashing
5-12 times, which seems bad. Better to let the hash table build and
then re-hash once, if we can see it will be useful.
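
Roughly, the "rebuild once, only if it pays" test I have in mind looks
something like the sketch below. Again, this is standalone illustrative C,
not taken from Heikki's patch or from nodeHash.c; the struct, field names
and the crude linear cost model are all assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct
    {
        uint32_t    nbuckets;       /* buckets in the table as built */
        double      ntuples;        /* inner tuples actually inserted */
        double      outer_rows;     /* expected probes from the outer side */
    } HashBuildStats;

    /*
     * Decide, once the table is fully built, whether a single re-hash into
     * a larger bucket array is worth doing before probing starts.
     */
    static bool
    worth_rebuilding(const HashBuildStats *stats,
                     double target_tuples_per_bucket)
    {
        double      avg_chain = stats->ntuples / stats->nbuckets;

        /* Already dense enough: a rebuild buys nothing. */
        if (avg_chain <= target_tuples_per_bucket)
            return false;

        /*
         * Cost: re-insert every inner tuple exactly once.
         * Benefit: each outer probe walks a shorter chain, saving roughly
         * (avg_chain - target) / 2 comparisons per probe on average.
         */
        double      rebuild_cost = stats->ntuples;
        double      probe_saving = stats->outer_rows *
            (avg_chain - target_tuples_per_bucket) / 2.0;

        return probe_saving > rebuild_cost;
    }

The point of doing this once, after the build, is that the true tuple count
is known by then, so we pay the re-insertion cost at most once rather than
at every x2 overstretch.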
OK?
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services