Sat, 09 Jul 2011 10:33:03 +0200, you wrote:
On Sat, 2011-07-09 at 09:25 +0200, Gael Le Mignot wrote:
We are running a PostgreSQL 8.4 database, with two tables containing a
lot (> 1 million) of moderately small rows. They have some btree
indexes, and one of the two tables also has a GIN full-text index.
We noticed that the autovacuum process tends to use a lot of memory,
pushing the postgres process close to 1GB while it's running.
Well, it could be its own memory (see maintenance_work_mem), or shared
memory. So, it's hard to say if it's really an issue or not.
BTW, how much memory do you have on this server? what values are used
for shared_buffers and maintenance_work_mem?
maintenance_work_mem is at 16MB, shared_buffers at 24MB.
The server currently has 2GB of RAM; we'll add more (it's a VM), but we
would like to be able to estimate how much memory it will need for a
given rate of INSERTs into the table, so we can plan future capacity.
I looked in the documentation, but I didn't find the information: do
you know how to estimate the memory required by autovacuum if we
increase the number of rows? Is it linear? Logarithmic?
It should use up to maintenance_work_mem; it depends on the value you
set for that parameter.
So, it shouldn't depend on data size ? Is there a fixed multiplicative
factor between maintenance_work_mem and the memory actually used ?
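As a rough model (assuming 8.4's lazy VACUUM behavior: it keeps one 6-byte tuple pointer per dead row, but never allocates more than maintenance_work_mem for that array), the usage is linear in the number of dead rows until it hits the cap:

```python
# Rough sketch of VACUUM's dead-tuple memory in PostgreSQL 8.4.
# Lazy VACUUM stores a 6-byte item pointer (TID) per dead row,
# capped at maintenance_work_mem; numbers below are illustrative.
TID_BYTES = 6

def vacuum_tid_memory(dead_rows, maintenance_work_mem_bytes):
    """Estimated memory for the dead-tuple array: linear in the
    number of dead rows until the maintenance_work_mem cap."""
    return min(dead_rows * TID_BYTES, maintenance_work_mem_bytes)

mwm = 16 * 1024 * 1024  # 16MB, as configured above

# 1 million dead rows fit comfortably in 16MB: ~6MB
print(vacuum_tid_memory(1_000_000, mwm))  # 6000000

# the cap is reached at roughly 16MB / 6 bytes dead rows
print(mwm // TID_BYTES)                   # 2796202
```

If that model holds, a 16MB maintenance_work_mem can never be the source of a ~1GB process size; the rest is likely shared_buffers plus OS-level caching counted against the process.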
Also, is there a way to reduce that memory usage ?
Reduce maintenance_work_mem. Of course, if you do that, VACUUM could
take a lot longer to execute.
Would running the autovacuum more frequently lower its memory usage ?
Thanks, we'll try that.
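For reference, the knobs that control how often autovacuum fires are in postgresql.conf; a sketch with illustrative values (not recommendations — lowering the scale factor makes autovacuum run sooner, so fewer dead rows accumulate between runs):

```ini
# postgresql.conf — illustrative values only
autovacuum = on
autovacuum_naptime = 1min             # how often the launcher wakes up (default 1min)
autovacuum_vacuum_scale_factor = 0.1  # default 0.2: vacuum once 20% of the table is dead
autovacuum_vacuum_threshold = 50      # plus this many rows (default 50)
```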
Gaël Le Mignot - email@example.com
Pilot Systems - 9, rue Desargues - 75011 Paris
Tel : +33 1 44 53 05 55 - www.pilotsystems.net
Manage your contacts and your newsletters: www.cockpit-mailing.com