On Tue, Mar 23, 2010 at 5:28 PM, Bhella Paramjeet-PFCW67 wrote:
Thanks for replying.
Can you explain what you mean by increase the number of threads or how I can increase the number of threads? I just have 2 tables that are very active. I am using postgres version 8.2.7 and 3510 storagetek array with 10 disks in raid 1+0.
Sure, if you psql into your db, type "set autovacuum" and hit tab a
couple of times, you'll see a list of all these
configuration options. The one we're looking for is
autovacuum_max_workers. Setting this to something higher will allow
that many threads to run at once. While 6 or 8 threads at 5 or 10
milliseconds of delay are OK on a bigger RAID array, they'll kill the
performance of a machine with a pair of disks in RAID-1. As you drop the
cost_delay, you can no longer run as many threads without starving
your machine of IO. It's a good idea to keep track of how many vacuum
threads you're usually running and how long they run for
(pg_stat_activity can shed some light there).
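For example, something along these lines will show what vacuum activity is running and for how long (a sketch; the procpid and current_query column names are from the 8.x-era pg_stat_activity and became pid and query in later releases):

```sql
-- List running vacuum activity and how long each has been at it
SELECT procpid, now() - query_start AS running_for, current_query
  FROM pg_stat_activity
 WHERE current_query LIKE '%VACUUM%';
```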
What you're trying to do is get enough threads running so any large
tables that take a long time (half hour or more maybe) to vacuum don't
get in the way of all the other tables getting vacuumed too. If
you've got 2 really big tables and the rest aren't so big, then three
threads is likely plenty. If you've got 40 really big tables that
take a long time to get vacuumed, then you might need more than just three.
Use "iostat -x 10 /dev/sdx"
to monitor the db arrays and see how much IO you're utilizing with x
autovac threads running, then increase it and keep an eye on it, all
while running under a fairly steady production load. If going from 3
to 4 threads makes the average utilization go up by 5% then you have
an idea how much each thread is costing you.
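To put a number on it, here's a tiny post-processing sketch (my own illustration, not from the thread): average the %util column, which iostat -x prints as the last field of each device line, across samples captured while a given number of autovac threads were running.

```shell
# Average the %util column (last field) of iostat -x device lines.
# The two sample lines below stand in for real "iostat -x 10 /dev/sdx"
# output; in practice you'd pipe in the captured device lines instead.
printf '%s\n' \
  'sda 0.00 12.00 3.00 40.00 64.00 800.00 40.2 0.50 11.00 2.10 9.0' \
  'sda 0.00 15.00 4.00 55.00 80.00 1100.00 40.9 0.70 12.50 2.00 12.0' |
awk '{ sum += $NF; n++ } END { printf "avg util: %.1f\n", sum/n }'
```

Compare that average across a run with 3 threads and a run with 4 to see what each extra thread costs you.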
Changing cost_delay OR cost_limit will directly affect how much IO is
getting thrown to autovacuum. Lower values of cost_delay and higher
values of cost_limit will cost you more in terms of autovacuum daemon
using up IO. You really don't want it using up too much of your IO.
I shoot for something in the 10% range max, maybe a bit more.
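As a concrete starting point, these knobs live in postgresql.conf; the numbers below are purely illustrative, not recommendations (and note that autovacuum_max_workers only exists in 8.3 and later, while 8.2 runs a single autovacuum process):

```
# postgresql.conf -- illustrative values only
autovacuum = on
autovacuum_max_workers = 3           # 8.3+ only
autovacuum_vacuum_cost_delay = 10    # ms; lower = more aggressive, more IO
autovacuum_vacuum_cost_limit = 200   # higher = more IO per sleep interval
```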
Then you want to keep an eye on table bloat to see if vacuum is
keeping up. If it's falling behind, running "vacuum verbose" as a
superuser will give you an idea.
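For instance (a sketch; "mybigtable" is a placeholder name, and on 8.2 a database-wide run also prints a free-space-map summary at the end showing whether the FSM is sized adequately):

```sql
-- As superuser: per-table detail on dead rows removed and remaining
VACUUM VERBOSE mybigtable;
-- Database-wide; on 8.2 the FSM summary at the end is the part to watch
VACUUM VERBOSE;
```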
From: Scott Marlowe
Sent: Tuesday, March 23, 2010 2:42 PM
To: Bhella Paramjeet-PFCW67
Cc: firstname.lastname@example.org; email@example.com
Subject: Re: [ADMIN] tuning auto vacuum for highly active tables
On Tue, Mar 23, 2010 at 2:54 PM, Bhella Paramjeet-PFCW67 wrote:
We have a postgres database in which a couple of tables get bloated due to
heavy inserts and deletes. Auto vacuum is running. My question is how can I
make auto vacuum more aggressive? I am thinking of enabling
autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit parameters.
Can anyone suggest how to calculate the appropriate values for these
parameters and if there are any side effects of enabling these parameters.
Any help will be highly appreciated.
OK, autovacuum runs x number of threads, and these threads can have
their IO impact limited by cost delay and cost limit.
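Roughly speaking, a worker does page IO until it has accumulated cost_limit cost units, then sleeps for cost_delay, so its IO budget scales with cost_limit / cost_delay. A toy calculation with made-up numbers:

```shell
# Toy model: relative autovacuum IO budget per worker, in cost units
# per millisecond of enforced sleep. Halving cost_delay doubles it.
awk 'BEGIN {
  printf "cost_limit 200, delay 20ms: %.1f\n", 200 / 20
  printf "cost_limit 200, delay 10ms: %.1f\n", 200 / 10
}'
```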
Your first choice is based a lot on your db needs. If you have a lot
of large tables that all need to be vacuumed a lot, then you might
want to first increase the number of threads before making any of them
more aggressive. Then you might want to make the vacuums more
aggressive by lowering cost_delay from 20 down to 10 or 5 or so.
On our servers we run 6 threads with a cost_delay of 3 or 4
milliseconds, and autovacuum keeps up without getting in the way. We
have a decent DAS array, so we can handle a lot of vacuum threads
running at once before they become an issue.
The smaller your disk set, the less you can throw vacuum at it and not
expect it to mess up the rest of the app. It's all a trade-off, but
if you don't have more than a disk or two to throw at your db don't
expect vacuum to keep up with really heavy activity without impacting
your system's performance.
When fascism comes to America, it will be intolerance sold as diversity.