I am thinking we should scale it based on max_fsm_relations.


Tom Lane wrote:
A while back, there was a discussion about psql \d display being really
slow in a database with 4000 tables:

I looked into this some, and it seems that part of the problem is that
the catalog caches are limited to hold no more than 5000 tuples in total
(MAXCCTUPLES in catcache.c). When you're looking at thousands of
tables, the tuples you want age out of cache immediately and so every
SearchSysCache call has to actually go to disk (or at least shared buffers).

The 5000 setting is basically Berkeley-era (when we got the code from
Berkeley, the limit was 300 tuples per cache, which I changed to a
system-wide limit some years ago). A reasonable guess is that we're
talking about entries of 100 to 200 bytes each, making the maximum
catcache space usage a megabyte or less (per backend, that is). Seems
like this is mighty small on modern machines.

We could make a GUC variable to control this, or just arbitrarily bump
up the constant to 50000 or so. Is it worth another GUC? Not sure.

There's another hardwired setting right next to this one, which is the
number of hash buckets per catcache. Since the caches are likely to
have significantly different sizes, that is looking a bit obsolete too.
We could put per-cache numbers into the setup lists in syscache.c
easily. Making the cache sizes dynamic a la dynahash.c is another
possibility, but I'm unsure if it's worth the trouble.


regards, tom lane

Bruce Momjian http://candle.pha.pa.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

group: pgsql-hackers
posted: Jun 14, '06 at 11:25p
active: Jun 15, '06 at 1:32p