Hello,

Okay, I'm having some trouble determining how to most efficiently
store a database that will contain a couple of huge tables (think 5
billion+ rows). Each of these tables has a bigint id and a character
varying value. Currently, I'm partitioning these tables based on
hashtext(value) % 1000, to determine which subtable a given value
should be stored in.
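
For concreteness, a rough sketch of that layout (table and column
names here are just for the example, simplified from the real schema):

    -- illustrative parent table with 1000 children, values_0 .. values_999
    CREATE TABLE values_parent (
        id    bigint,
        value character varying
    );

    -- one of the children; rows are routed by hashtext(value) % 1000
    CREATE TABLE values_0 (
        CHECK ( (hashtext(value) % 1000 + 1000) % 1000 = 0 )
    ) INHERITS (values_parent);

    -- the application picks the target subtable before inserting, e.g.
    SELECT (hashtext('some value') % 1000 + 1000) % 1000;  -- e.g. 417 -> values_417

(hashtext() returns a signed integer, so the extra "+ 1000) % 1000"
just keeps the partition number non-negative.)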

However, I often also need to find the value for a given id. Instead
of using the sequential numbering that a BIGSERIAL would provide, I am
thinking: wouldn't it make some kind of sense to use hashtext(value)
to determine the id? Then, if I need to find the value that belongs to
a certain id, I can just take id % 1000 and know which subtable the
value is stored in, reducing the number of tables to search by a
factor of 500.
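
To make the lookup path concrete (again with the illustrative table
names from the sketch above): if the id is hashtext(value), the
subtable index falls straight out of the id, e.g.

    -- 2066626885 % 1000 = 885, so only values_885 needs to be searched
    SELECT value
    FROM   values_885
    WHERE  id = 2066626885;

so only a single subtable has to be touched per lookup.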

Now, my question is: how big is the chance of a collision between
hashes? I noticed that the function only returns a 32-bit number, so I
figure a collision must occur at least once in roughly 4 billion
values. If this approach (using hashes as keys) is not recommended,
does anyone have other suggestions on how to make the subtable name
derivable from an identification number?
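
As a back-of-the-envelope sanity check of that intuition (my own rough
arithmetic, so please correct me if I'm off): with a 32-bit hash there
are only 2^32, about 4.3 billion, possible values, so at 5 billion rows
collisions are unavoidable, and the birthday bound n*(n-1) / (2 * 2^32)
suggests there would be a lot of them:

    -- expected number of colliding pairs for 5e9 rows and a 32-bit hash
    SELECT (5e9 * (5e9 - 1)) / (2 * 4294967296::numeric);
    -- roughly 2.9 billion pairs sharing a hash value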
