
On 22 June 2013 21:40, Stephen Frost wrote:

> I'm actually not a huge fan of this as it's certainly not cheap to do. If it
> can be shown to be better than an improved heuristic then perhaps it would
> work but I'm not convinced.
We need two heuristics, it would seem:

* an initial heuristic to overestimate the number of buckets when we
have sufficient memory to do so

* a heuristic to determine whether it is cheaper to rebuild a dense
hash table into a better one.

Although I like Heikki's rebuild approach, we can't do this on every x2
overstretch. Given that large underestimates exist, we'd end up rehashing
5-12 times, which seems bad. Better to let the hash table build and
then re-hash once, if we can see it will be useful.

OK?

--
  Simon Riggs http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services

Discussion Overview
group: pgsql-hackers @ postgresql
posted: Jun 22, '13 at 1:15p
active: Jul 3, '13 at 8:09a
posts: 24
users: 6
website: postgresql.org...
irc: #postgresql
