[ADMIN] tuning auto vacuum for highly active tables
Hi All,



We have a Postgres database in which a couple of tables get bloated due to
heavy inserts and deletes. Autovacuum is running. My question is: how
can I make autovacuum more aggressive? I am thinking of enabling the
autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit
parameters. Can anyone suggest how to calculate appropriate values
for these parameters, and whether there are any side effects of enabling
them? Any help will be highly appreciated.



Thanks

Paramjeet Kaur


  • Scott Marlowe at Mar 23, 2010 at 9:42 pm

    On Tue, Mar 23, 2010 at 2:54 PM, Bhella Paramjeet-PFCW67 wrote:
    OK, autovacuum runs some number of worker threads, and those threads
    can have their IO impact limited by cost_delay and cost_limit.

    Your first choice is based a lot on your db needs. If you have a lot
    of large tables that all need to be vacuumed a lot, then you might
    want to first increase the number of threads before making any of them
    more aggressive. Then you might want to make the vacuums more
    aggressive by lowering cost_delay from 20 down to 10 or 5 milliseconds
    or so.

    On our servers we run 6 threads with a cost_delay of 3 or 4
    milliseconds, and autovacuum keeps up without getting in the way. We
    have a decent DAS array, so we can handle a lot of vacuum threads
    running at once before they become an issue.

    The smaller your disk set, the less you can throw vacuum at it and not
    expect it to mess up the rest of the app. It's all a trade off, but
    if you don't have more than a disk or two to throw at your db don't
    expect vacuum to keep up with really heavy activity without impacting
    your system's performance.
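    A minimal postgresql.conf sketch of the knobs discussed above. The values are illustrative, not recommendations: autovacuum_max_workers only exists in 8.3 and later, and the right numbers depend entirely on your IO headroom.

```ini
# postgresql.conf -- illustrative values only, tune against your own IO capacity
autovacuum = on
autovacuum_max_workers = 6           # 8.3+ only: vacuum workers that may run at once
autovacuum_vacuum_cost_delay = 5     # ms to sleep per cost round; lower = more aggressive
autovacuum_vacuum_cost_limit = 200   # cost accumulated before each sleep
```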
  • Bhella Paramjeet-PFCW67 at Mar 23, 2010 at 11:28 pm
    Hi Scott,

    Thanks for replying.
    Can you explain what you mean by increasing the number of threads, and how I can do that? I just have 2 tables that are very active. I am using Postgres version 8.2.7 and a StorageTek 3510 array with 10 disks in RAID 1+0.

    Thanks
    Paramjeet Kaur

  • Scott Marlowe at Mar 23, 2010 at 11:59 pm

    On Tue, Mar 23, 2010 at 5:28 PM, Bhella Paramjeet-PFCW67 wrote:
    Hi Scott,

    Thanks for replying.
    Can you explain what you mean by increasing the number of threads, and how I can do that? I just have 2 tables that are very active. I am using Postgres version 8.2.7 and a StorageTek 3510 array with 10 disks in RAID 1+0.
    Sure, if you psql into your db, type

    show autovac

    and hit tab a couple of times, you'll see a list of all these
    configuration options. The one we're looking for is
    autovacuum_max_workers. Setting this higher will allow that many
    threads to run at once. While 6 or 8 threads at 5 or 10 milliseconds
    delay are OK on a bigger RAID array, they'll kill the performance
    of a machine with a pair of disks in a RAID-1. As you drop the
    cost_delay, you can no longer run as many threads without starving
    your machine of IO. It's a good idea to keep track of how many vacuum
    threads you're usually running and how long they run for
    (pg_stat_activity can shed some light there).
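    In 8.2-era catalogs that check might look like the sketch below (the column names changed in later releases: procpid and current_query became pid and query in 9.2):

```sql
-- List running (auto)vacuum backends and how long each has been at it (8.2 columns)
SELECT procpid,
       now() - query_start AS runtime,
       current_query
FROM pg_stat_activity
WHERE current_query LIKE '%VACUUM%';
```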

    What you're trying to do is get enough threads running so any large
    tables that take a long time (half hour or more maybe) to vacuum don't
    get in the way of all the other tables getting vacuumed too. If
    you've got 2 really big tables and the rest aren't so big, then three
    threads is likely plenty. If you've got 40 really big tables that
    take a long time to get vacuumed, then you might need more than just 3
    threads.

    Use iostat -x 10 /dev/sdx

    to monitor the db arrays to see how much IO you're utilizing with x
    autovac threads running, then increase it and keep an eye on it, all
    while running under a fairly steady production load. If going from 3
    to 4 threads makes the average utilization go up by 5% then you have
    an idea how much each thread is costing you.

    Changing cost_delay OR cost_limit will directly affect how much IO
    autovacuum gets. Lower values of cost_delay and higher values of
    cost_limit let the autovacuum daemon use more IO. You really don't
    want it using up too much of your IO; I shoot for something in the
    10% range max, maybe a bit more.
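    As a rough sketch of the trade-off: under the cost model, vacuum sleeps cost_delay milliseconds each time it has accumulated cost_limit worth of page costs, so you can estimate a sleep-bound page budget. The page costs assumed below are the stock defaults (hit=1, miss=10, dirty=20).

```python
# Sleep-bound upper estimate of how many pages vacuum can touch per second.
# Assumes the stock page costs: vacuum_cost_page_hit=1, miss=10, dirty=20.

def pages_per_second(cost_limit, cost_delay_ms, page_cost):
    """Vacuum sleeps cost_delay_ms after every cost_limit worth of work,
    so it gets at most 1000 / cost_delay_ms work rounds per second."""
    rounds_per_sec = 1000.0 / cost_delay_ms
    return rounds_per_sec * (cost_limit / page_cost)

# Defaults (limit=200, delay=20 ms), worst case of every page dirtied (cost 20):
print(pages_per_second(200, 20, 20))   # 500 pages/sec, ~4 MB/s at 8 kB pages
# Dropping the delay to 5 ms quadruples the budget:
print(pages_per_second(200, 5, 20))    # 2000 pages/sec, ~16 MB/s
```

    The same arithmetic shows why raising cost_limit and lowering cost_delay compound: both multiply the IO budget, which is why you watch iostat while changing them one at a time.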

    Then you want to keep an eye on table bloat to see if vacuum is
    keeping up. If it's falling behind, VACUUM VERBOSE run as a
    superuser will give you an idea.



    --
    When fascism comes to America, it will be intolerance sold as diversity.
  • Alvaro Herrera at Mar 24, 2010 at 3:20 am

    Scott Marlowe wrote:
    Sure, if you psql into your db and do:

    show autovac

    and hit tab a couple times you'll see a list of all these
    configuration options. The one we're looking for is
    autovacuum_max_workers. Setting this to something higher will allow
    that many threads to run at once. While 6 or 8 threads at 5 or 10
    milliseconds delay is ok on a bigger RAID array, it'll kill the perf
    of a machine with a pair of disks in a RAID-1. As you drop the
    cost_delay, you can no longer run as many threads without starving
    your machine of IO. It's a good idea to keep track of how many vacuum
    threads you're usually running and how long they run for
    (pg_stat_activity can shed some light there).
    Hmm, keep in mind that having more workers means that each one of them
    increments its cost_delay so that the total is roughly what you
    configured.

    Also, keep in mind that max_workers is a new setting in 8.3. Since the
    OP is running 8.2, he can only get one "worker". Presumably he needs to
    disable autovacuum for those two very active tables and set up a cron
    job to process them on their own schedule.
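    On 8.2 that could be as simple as a cron entry calling vacuumdb. The table and database names below are hypothetical; per-table autovacuum in 8.2 was disabled via the pg_autovacuum system catalog.

```cron
# crontab for the postgres user -- hypothetical table/db names
# Vacuum the two hot tables every 15 minutes
*/15 * * * *  vacuumdb --quiet --table=hot_table_1 mydb && vacuumdb --quiet --table=hot_table_2 mydb
```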

    --
    Alvaro Herrera http://www.CommandPrompt.com/
    The PostgreSQL Company - Command Prompt, Inc.
  • Alvaro Herrera at Mar 24, 2010 at 3:23 am

    Alvaro Herrera wrote:

    Also, keep in mind that max_workers is a new setting in 8.3. Since the
    OP is running 8.2, he can only get one "worker". Presumably he needs to
    disable autovacuum for those two very active tables and set up a cron
    job to process them on their own schedule.
    Err, sorry, "she", not "he".

    --
    Alvaro Herrera http://www.CommandPrompt.com/
    The PostgreSQL Company - Command Prompt, Inc.
  • Szu-Ching Peckner at Mar 23, 2010 at 9:47 pm
    We ran into the same problem: we had a big table and played with
    vacuum cost and delay, but couldn't shrink it much because of the
    heavy inserts and deletes. We ended up using Slony for an upgrade;
    the initial replication copied the data fresh, which shrank our large
    table with minimal, controlled downtime.


Discussion Overview
group: pgsql-admin @ postgresql.org
posted: Mar 23, 2010 at 9:01 PM
active: Mar 24, 2010 at 3:23 AM
posts: 8
users: 4