Attached is a patch that tracks and displays the accumulated cost when
autovacuum is running. Code by Noah Misch and myself. I hope this idea
will bring a formal process to vacuum tuning, which is currently too
hard to do. I was about to add "without..." to that, but I then
realized it needs no disclaimer; it's just too hard, period. Vacuum
issues are enemy #1 at all the terabyte-scale customer sites I've been
fighting with lately.

The patch updates the command string just before the workers sleep to
show how much work they've done so far. And at the end, it adds a few
new lines to the information written to the logs, when the autovacuum is
notable enough to be logged at all. The overhead it adds is at most a
few integer operations per buffer processed and a slower title string
update once per sleep. It's trivial compared to both the vacuum itself,
and to the instrumentation's value to sites with vacuum issues.

To demonstrate the patch in action, here's a test case using a 6.4GB
pgbench_accounts table:

$ createdb pgbench
$ pgbench -i -s 500 pgbench
$ psql -d pgbench -c "select pg_relation_size('pgbench_accounts');"
pg_relation_size
------------------
6714761216
$ psql -d pgbench -c "select relname,relpages from pg_class where
relname='pgbench_accounts';"
relname | relpages
------------------+----------
pgbench_accounts | 819673
$ psql -d pgbench -c "delete from pgbench_accounts where aid<20000000"

You can see the new information in the command string with ps and grep:

$ while [ 1 ] ; do (ps -eaf | grep "[a]utovacuum worker" && sleep 60) ; done
gsmith 2687 17718 0 15:44 ? 00:00:00 postgres: autovacuum
worker process h=19 m=14196 d=14185
...
gsmith 2687 17718 0 15:44 ? 00:00:09 postgres: autovacuum
worker process h=182701 m=301515 d=321345
...
gsmith 2687 17718 1 15:44 ? 00:00:23 postgres: autovacuum
worker process h=740359 m=679987 d=617559
...
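The h=/m=/d= fields are easy to pull out of that output with a few lines of
scripting. Here's a minimal Python sketch; the parse_counters helper is my
own invention, and the wrapped process title is joined back onto one line:

```python
import re

# Pull the accumulated hit/miss/dirty counters out of one line of
# "ps" output from an autovacuum worker, as shown above.
PATTERN = re.compile(r"h=(\d+) m=(\d+) d=(\d+)")

def parse_counters(ps_line):
    """Return (hit, miss, dirty) buffer counts, or None if absent."""
    m = PATTERN.search(ps_line)
    if m is None:
        return None
    return tuple(int(g) for g in m.groups())

line = ("gsmith 2687 17718 0 15:44 ? 00:00:09 postgres: autovacuum "
        "worker process h=182701 m=301515 d=321345")
print(parse_counters(line))  # (182701, 301515, 321345)
```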

That's accumulated hit/miss/dirty counts, the raw numbers. When the
autovacuum is finished, those totals appear as a new line in the log entry:

LOG: automatic vacuum of table "pgbench.public.pgbench_accounts": index
scans: 1
pages: 0 removed, 819673 remain
tuples: 19999999 removed, 30000022 remain
buffer usage: 809537 hits, 749340 misses, 686660 dirtied
system usage: CPU 5.70s/19.73u sec elapsed 2211.60 sec
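If you'd rather scrape those totals out of the logs programmatically, a
sketch like this works; the regexes assume the exact field layout of the
sample entry above, with the entry collapsed to one string:

```python
import re

# Parse the new totals back out of a log entry like the one above.
entry = (
    'automatic vacuum of table "pgbench.public.pgbench_accounts": '
    "buffer usage: 809537 hits, 749340 misses, 686660 dirtied "
    "system usage: CPU 5.70s/19.73u sec elapsed 2211.60 sec"
)

buffers = re.search(r"buffer usage: (\d+) hits, (\d+) misses, (\d+) dirtied",
                    entry)
hits, misses, dirtied = map(int, buffers.groups())
elapsed = float(re.search(r"elapsed ([\d.]+) sec", entry).group(1))

print(hits, misses, dirtied, elapsed)  # 809537 749340 686660 2211.6
```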

To check if this makes sense, we need the relevant parameters at the
time, which were the defaults (I only tweaked some basic config bits
here, including shared_buffers=400MB so a bit more was cached):

vacuum_cost_page_hit = 1 # 0-10000 credits
vacuum_cost_page_miss = 10 # 0-10000 credits
vacuum_cost_page_dirty = 20 # 0-10000 credits
vacuum_cost_limit = 200 # 1-10000 credits
autovacuum_vacuum_cost_delay = 20ms

A 20ms delay means the worker wakes up 50 times per second, so the cost
accumulation should be 200 * 50 = 10,000/second, or 600K/minute. That's how
fast the cost should be increasing here. Given a runtime of 2211.60 seconds,
that's a total estimated cost of 2211.60 * 10000 = 22,116,000. Now we
check that against the totals printed at the end of the vacuum:

1 * 809537 hits = 809,537
10 * 749340 misses = 7,493,400
20 * 686660 dirtied = 13,733,200

And that gives a directly computed total of 22,036,137. Close enough to
show this is working as expected. How I computed all that should also
give you an idea of how to use these numbers to extract other useful
statistics, for example if you'd like to tune the balance of the various
cost_page_* parameters. I have no idea how anyone could ever set those
relative to one another without this data; it would take epic guessing
skills.
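The whole cross-check fits in a few lines of Python; this just recomputes
the arithmetic above from the numbers already in this message:

```python
# Expected cost from the delay settings versus the cost implied by the
# hit/miss/dirty totals in the log entry above.
page_hit, page_miss, page_dirty = 1, 10, 20   # vacuum_cost_page_* defaults
cost_limit, delay_ms = 200, 20                # cost limit per sleep, delay

elapsed = 2211.60                             # from the "system usage" line
expected = cost_limit * (1000 / delay_ms) * elapsed   # 200 credits * 50/sec

hits, misses, dirtied = 809537, 749340, 686660
actual = page_hit * hits + page_miss * misses + page_dirty * dirtied

print(f"expected ~{expected:,.0f}, actual {actual:,}")
```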

What else can you do with this data?

-Figure out if the VACUUM is still making progress when it appears stuck
-Estimate how long it will take to finish, based on current progress and
whatever total cost was logged last time VACUUM ran against this relation.
-Compute approximate hit rate on the read side. OS caching issues and
the ring buffer are obviously a problem with that, so this isn't too valuable.
-See the cost split when multiple vacuums are running. This problem
is why sites can't just use "total time to vacuum" as a useful proxy to
estimate how long one will take to run.
-Easy to track the read/write ratio
-Directly measure the write rate
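To make the first two items concrete, here's a hedged sketch of the
time-to-finish estimate; it assumes you can sample the worker's current
accumulated cost (say, from the ps fields) and that you saved the total cost
logged by the previous vacuum of the table. The helper name and sample
numbers are hypothetical:

```python
# Rough ETA: remaining cost divided by the observed accumulation rate.
def estimate_seconds_remaining(current_cost, cost_rate_per_sec,
                               last_total_cost):
    remaining = max(last_total_cost - current_cost, 0)
    return remaining / cost_rate_per_sec

# With the defaults here, cost accumulates at 200 * 50 = 10,000/second.
eta = estimate_seconds_remaining(
    current_cost=11_000_000,       # sampled mid-run (hypothetical)
    cost_rate_per_sec=10_000,
    last_total_cost=22_000_000)    # total from the previous run's log entry
print(f"roughly {eta:.0f} seconds left")  # roughly 1100 seconds left
```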

That last one is, I think, the part people are most perplexed by right
now, and this makes it trivial. How do you turn all these cost figures
into real-world read/write rates? That's been hard to do.

Now, you can take a bunch of samples of the data at 1 minute intervals,
like my little "ps | grep" example above does. The delta in the
"dirty=" column is how much was written per minute, in units of 8K
(usually) buffers. Multiply that by 8192/(60*1024*1024), and you get
MB/s out of there. I collected that data for a cleanup run of the
pgbench_accounts damage done above, CSV file with all the statistics is
attached.
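That conversion as a tiny helper (the 18,800-buffer sample delta is just an
illustration, picked to land near the rates in the table below):

```python
# Convert a one-minute delta of the "d=" counter, in 8K buffers, into an
# approximate write rate in MB/s.
BLOCK_SIZE = 8192  # bytes per buffer, the usual PostgreSQL block size

def dirty_delta_to_mb_per_sec(buffers_per_minute):
    return buffers_per_minute * BLOCK_SIZE / (60 * 1024 * 1024)

print(round(dirty_delta_to_mb_per_sec(18_800), 2))  # 2.45
```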

I also collected OS level stats from Linux about the actual read/write
rate of the process, converted into "Write Mbps" (those are actually in
MB/s; the sloppy capitalization is via OpenOffice "autocorrect"). Those
numbers are close enough to make me confident the dirty buffer totals
tracked here do turn into useful MB/s values. Sample of the most
interesting part:

Cost Delta    Dirty Mbps    Write Mbps
589,890 2.56 2.73
591,151 2.57 2.73
589,035 2.56 2.72
593,775 3.14 0.20
599,420 2.05 0.00
598,503 2.05 0.00
599,421 2.05 0.00
574,046 0.60 0.01
574,779 0.64 0.67
609,140 2.56 2.68
612,397 2.57 2.69
611,744 2.57 2.69
610,008 2.56 2.68

This shows the expected 600K/minute cost accumulation. And using the
dirty= numbers to compute MB/s of write speed closely matches the total
write speed of this process. Some of the difference might be I/O to
other things besides the main table here, some of it is just because OS
write caching will influence the write rate. In the spots where the OS
value and what's derived from the dirty rate diverge most, it appears to
be because vacuum is filling Linux's write cache. Actual writes
accumulated against the process then block for a while. It's a small
difference most of the time.

I'd be willing to accept a "Dirty MB/s" figure computed this way as
accurate enough for most purposes. And this patch lets you get that
data, currently unavailable without poking into the OS statistics (if at
all), just by doing a little log file and/or command string scraping:
totals at the end, or real-time monitoring, depending on how much work
you want to put into it. For a busy site where one or more autovacuum
processes are running most of the time, being able to monitor the vacuum
portion of the I/O this way will be a huge improvement over the current
state of things. I already have a stack of tools built on top of this
data I'm working on, and they're making it much easier to come up with
an iterative tuning process for autovacuum.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

  • Euler Taveira de Oliveira at Aug 17, 2011 at 11:50 pm

    Em 17-08-2011 18:04, Greg Smith escreveu:
    Attached is a patch that tracks and displays the accumulated cost when
    autovacuum is running. Code by Noah Misch and myself. I hope this idea
    will bring a formal process to vacuum tuning, which is currently too
    hard to do. I was about to add "without..." to that, but I then realized
    it needs no disclaimer; it's just too hard, period. Vacuum issues are
    enemy #1 at all the terabyte scale customer sites I've been fighting
    with lately.
    Interesting patch. I drafted a similar idea but didn't have a chance to
    publish it. It is a complement to the idea about autovacuum tuning [1]. Hope I
    will have time to post something for the next CF. And, of course, I will
    review this patch.
    The patch updates the command string just before the workers sleep to
    show how much work they've done so far. And at the end, it adds a few
    new lines to the information written to the logs, when the autovacuum is
    notable enough to be logged at all. The overhead it adds is at most a
    few integer operations per buffer processed and a slower title string
    update once per sleep. It's trivial compared to both the vacuum itself,
    and to the instrumentation's value to sites with vacuum issues.
    I don't like exposing this information only in the process title. It would
    be difficult for client apps (for example, PGAdmin) to track this kind of
    information, and it is restricted to local access. I'm not objecting to
    displaying this information in the process title; I'm just saying that the
    information should be exposed in functions (say
    pg_stat_get_vacuum_[hit|miss|dirty]) too. I'm not sure about adding this
    information to incremental counters, but that would be useful for tracing a
    vacuum work pattern.


    [1] http://archives.postgresql.org/pgsql-hackers/2011-06/msg00678.php


    --
    Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/
    PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
  • Greg Smith at Aug 18, 2011 at 1:23 am

    On 08/17/2011 07:42 PM, Euler Taveira de Oliveira wrote:
    I don't like exposing this information only in the process title. It
    would be difficult for client apps (for example, PGAdmin) to track
    this kind of information, and it is restricted to local access. I'm not
    objecting to displaying this information in the process title; I'm just
    saying that the information should be exposed in functions (say
    pg_stat_get_vacuum_[hit|miss|dirty]) too.
    I tend to build the simplest possible thing that is useful enough to
    work. The data is getting stored and shown now, where it wasn't
    before. If it's possible to expose that in additional ways later too,
    great. The big step up for this information is to go from
    "unobtainable" to "obtainable". I'd prefer not to add a quest for
    "easily obtainable" to the requirements until that big jump is made, for
    fear it will cause nothing to get delivered.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Magnus Hagander at Aug 18, 2011 at 6:46 am

    On Thu, Aug 18, 2011 at 03:23, Greg Smith wrote:
    On 08/17/2011 07:42 PM, Euler Taveira de Oliveira wrote:

    I don't like exposing this information only in the process title. It would
    be difficult for client apps (for example, PGAdmin) to track this kind of
    information, and it is restricted to local access. I'm not objecting to
    displaying this information in the process title; I'm just saying that the
    information should be exposed in functions (say
    pg_stat_get_vacuum_[hit|miss|dirty]) too.
    I tend to build the simplest possible thing that is useful enough to work.
    The data is getting stored and shown now, where it wasn't before.  If it's
    possible to expose that in additional ways later too, great.  The big step
    up for this information is to go from "unobtainable" to "obtainable".  I'd
    prefer not to add a quest for "easily obtainable" to the requirements until
    that big jump is made, for fear it will cause nothing to get delivered.
    By only putting it in the ps display, you exclude all the users who
    don't have an easy way to look at that information. The big group
    there is Windows, but it's not necessarily easy on all other platforms
    as well, afaik. And possibly even more importantly, it makes it
    impossible to view it from tools like pgadmin. I think it's definitely
    worthwhile to add support to view it through the stats collector as
    well from the beginning. The question there is if it's enough to just
    show it in the current_query (kind of like it's done in the ps
    output), or if we want a completely separate view with this info.

    Also, unrelated to that, wouldn't this information be interesting for
    non-autovacuum queries as well?
  • Dimitri Fontaine at Aug 18, 2011 at 9:49 am

    Magnus Hagander writes:
    Also, unrelated to that, wouldn't this information be interesting for
    non-autovacuum queries as well?
    I was about to say that I would like to see it for normal queries too,
    but I guess we already have it:

    => explain (analyze, buffers, costs off)
    select * from pg_attribute a join pg_class c on a.attrelid = c.oid;
    QUERY PLAN
    --------------------------------------------------------------------------------
    Hash Join (actual time=0.569..4.255 rows=2158 loops=1)
    Hash Cond: (a.attrelid = c.oid)
    Buffers: shared hit=48
    -> Seq Scan on pg_attribute a (actual time=0.008..0.462 rows=2158 loops=1)
    Buffers: shared hit=40
    -> Hash (actual time=0.541..0.541 rows=282 loops=1)
    Buckets: 1024 Batches: 1 Memory Usage: 54kB
    Buffers: shared hit=8
    -> Seq Scan on pg_class c (actual time=0.010..0.269 rows=282 loops=1)
    Buffers: shared hit=8
    Total runtime: 4.551 ms
    (11 rows)


    Also, from where I sit the ps title update for normal queries is about
    useless, as I see loads of IDLE postgresql backends in top that are
    consuming 20% and more CPU time. The refresh rate is way too low to be
    useful, and having the title updated more frequently would probably
    consume enough CPU that it would defeat its purpose (going from
    instrumenting to slowing down enough that you can see what's happening
    is not where I'd want to go).


    Regards,
    --
    Dimitri Fontaine
    http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
  • Euler Taveira de Oliveira at Aug 18, 2011 at 10:34 am

    Em 18-08-2011 03:39, Magnus Hagander escreveu:
    Also, unrelated to that, wouldn't this information be interesting for
    non-autovacuum queries as well?
    Yes, it would. AFAICS, the patch will display that message in process titles.
    However, the analyze code also uses vacuum_delay_point(). How do you handle that?

    It would be another patch... autovacuum has an option to display summarized
    information but vacuum doesn't. Isn't it time to be symmetrical here?


    --
    Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/
    PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
  • Tom Lane at Aug 18, 2011 at 1:57 pm

    Magnus Hagander writes:
    On Thu, Aug 18, 2011 at 03:23, Greg Smith wrote:
    On 08/17/2011 07:42 PM, Euler Taveira de Oliveira wrote:
    I don't like exposing this information only in the process title.
    I tend to build the simplest possible thing that is useful enough to work.
    By only putting it in the ps display, you exclude all the users who
    don't have an easy way to look at that information. The big group
    there is Windows, but it's not necessarily easy on all other platforms
    as well, afaik.
    Yeah. Also, process title updates are friggin expensive on some
    platforms --- so much so, that we have a GUC setting to disable them.
    So I don't think we should use that technique at all. Put the info
    into the stats collector instead (not "also").

    regards, tom lane
  • Peter Eisentraut at Aug 18, 2011 at 2:54 pm

    On tor, 2011-08-18 at 08:39 +0200, Magnus Hagander wrote:
    Also, unrelated to that, wouldn't this information be interesting for
    non-autovacuum queries as well?
    Last year we were discussing some details on progress reporting, and
    some people suggested that instead of printing a single percentage, we
    should let each type of activity print out whatever metrics it has that
    would allow an experienced DBA to track the progress. Effectively, this
    is what this patch is trying to do.

    So how about adding a column to pg_stat_activity, progress_metrics or
    something like that, and add that information there.
  • Robert Haas at Aug 18, 2011 at 3:13 pm

    On Thu, Aug 18, 2011 at 10:54 AM, Peter Eisentraut wrote:
    On tor, 2011-08-18 at 08:39 +0200, Magnus Hagander wrote:
    Also, unrelated to that, wouldn't this information be interesting for
    non-autovacuum queries as well?
    Last year we were discussing some details on progress reporting, and
    some people suggested that instead of printing a single percentage, we
    should let each type of activity print out whatever metrics it has that
    would allow an experienced DBA to track the progress.  Effectively, this
    is what this patch is trying to do.

    So how about adding a column to pg_stat_activity, progress_metrics or
    something like that, and add that information there.
    That might be a good way to go. I don't think we want something like
    pg_stat_all_tables for this, because it seems that Greg's use case is
    to be able to see how a *particular* autovacuum process is doing
    *while it's running*, not to look at aggregate statistics over time.
    Putting it in pg_stat_activity would be good for that.

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Magnus Hagander at Aug 18, 2011 at 3:14 pm

    On Thu, Aug 18, 2011 at 17:13, Robert Haas wrote:
    On Thu, Aug 18, 2011 at 10:54 AM, Peter Eisentraut wrote:
    On tor, 2011-08-18 at 08:39 +0200, Magnus Hagander wrote:
    Also, unrelated to that, wouldn't this information be interesting for
    non-autovacuum queries as well?
    Last year we were discussing some details on progress reporting, and
    some people suggested that instead of printing a single percentage, we
    should let each type of activity print out whatever metrics it has that
    would allow an experienced DBA to track the progress.  Effectively, this
    is what this patch is trying to do.

    So how about adding a column to pg_stat_activity, progress_metrics or
    something like that, and add that information there.
    That might be a good way to go.  I don't think we want something like
    pg_stat_all_tables for this, because it seems that Greg's use case is
    to be able to see how a *particular* autovacuum process is doing
    *while it's running*, not to look at aggregate statistics over time.
    Putting it in pg_stat_activity would be good for that.
    It's also good to have it broken down into multiple columns, and not
    just a freetext column in the view - if tools should be able to parse
    it as well.
  • Robert Haas at Aug 18, 2011 at 3:24 pm

    On Thu, Aug 18, 2011 at 11:14 AM, Magnus Hagander wrote:
    On Thu, Aug 18, 2011 at 17:13, Robert Haas wrote:
    On Thu, Aug 18, 2011 at 10:54 AM, Peter Eisentraut wrote:
    On tor, 2011-08-18 at 08:39 +0200, Magnus Hagander wrote:
    Also, unrelated to that, wouldn't this information be interesting for
    non-autovacuum queries as well?
    Last year we were discussing some details on progress reporting, and
    some people suggested that instead of printing a single percentage, we
    should let each type of activity print out whatever metrics it has that
    would allow an experienced DBA to track the progress.  Effectively, this
    is what this patch is trying to do.

    So how about adding a column to pg_stat_activity, progress_metrics or
    something like that, and add that information there.
    That might be a good way to go.  I don't think we want something like
    pg_stat_all_tables for this, because it seems that Greg's use case is
    to be able to see how a *particular* autovacuum process is doing
    *while it's running*, not to look at aggregate statistics over time.
    Putting it in pg_stat_activity would be good for that.
    It's also good to have it broken down into multiple columns, and not
    just a freetext column in the view - if tools should be able to parse
    it as well.
    True. We could have a separate system view that only shows the status
    of currently-running vacuum processes. That wouldn't bother me either.

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Magnus Hagander at Aug 18, 2011 at 3:31 pm

    On Thu, Aug 18, 2011 at 17:23, Robert Haas wrote:
    On Thu, Aug 18, 2011 at 11:14 AM, Magnus Hagander wrote:
    On Thu, Aug 18, 2011 at 17:13, Robert Haas wrote:
    On Thu, Aug 18, 2011 at 10:54 AM, Peter Eisentraut wrote:
    On tor, 2011-08-18 at 08:39 +0200, Magnus Hagander wrote:
    Also, unrelated to that, wouldn't this information be interesting for
    non-autovacuum queries as well?
    Last year we were discussing some details on progress reporting, and
    some people suggested that instead of printing a single percentage, we
    should let each type of activity print out whatever metrics it has that
    would allow an experienced DBA to track the progress.  Effectively, this
    is what this patch is trying to do.

    So how about adding a column to pg_stat_activity, progress_metrics or
    something like that, and add that information there.
    That might be a good way to go.  I don't think we want something like
    pg_stat_all_tables for this, because it seems that Greg's use case is
    to be able to see how a *particular* autovacuum process is doing
    *while it's running*, not to look at aggregate statistics over time.
    Putting it in pg_stat_activity would be good for that.
    It's also good to have it broken down into multiple columns, and not
    just a freetext column in the view - if tools should be able to parse
    it as well.
    True.  We could have a separate system view that only shows the status
    of currently-running vacuum processes.  That wouldn't bother me either.
    That's what I'd like to have. We could also have aggregate counters on
    the table/database level of course.
  • Greg Smith at Aug 18, 2011 at 3:55 pm

    On 08/18/2011 10:54 AM, Peter Eisentraut wrote:
    So how about adding a column to pg_stat_activity, progress_metrics or
    something like that, and add that information there.
    Adding a field here (I'd go for the simpler "progress") and updating it
    regularly would be a reasonable way to go here. This data doesn't
    really need to go into the traditional statistics infrastructure to be
    useful. I didn't start there because I was already getting pushback on
    overloading the stats collector with constantly updated metrics last
    time I did something in this area. I wasn't going to try and argue why
    it was worth it in this case, just like I'm not going to argue about the
    complaint over the command string overhead being too high--just going to
    not do that instead. If the bikeshed I built doesn't look fancy enough
    to hold the bike I put in there, I'm not going to build a better one
    right now--I'll just put a cheaper bike in there instead.

    I was hoping to eventually take the useful summary bits at the end, the
    totals, and save those into statistics somewhere each time a VACUUM of
    either sort finishes. It would fit with the information shown in
    pg_stat_tables, but that's obviously getting too wide. Breaking out a
    pg_stat_autovacuum view that contains all the relevant bits currently
    shown in that view, plus these 3 new fields, would be a reasonable start.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Magnus Hagander at Aug 18, 2011 at 4:00 pm

    On Thu, Aug 18, 2011 at 17:54, Greg Smith wrote:
    On 08/18/2011 10:54 AM, Peter Eisentraut wrote:

    So how about adding a column to pg_stat_activity, progress_metrics or
    something like that, and add that information there.
    Adding a field here (I'd go for the simpler "progress") and updating it
    regularly would be a reasonable way to go here.  This data doesn't really
    need to go into the traditional statistics infrastructure to be useful.  I
    didn't start there because I was already getting pushback on overloading the
    stats collector with constantly updated metrics last time I did something in
    this area.  I wasn't going to try and argue why it was worth it in this
    case, just like I'm not going to argue about the complaint over the command
    string overhead being too high--just going to not do that instead.  If the
    bikeshed I built doesn't look fancy enough to hold the bike I put in there,
    I'm not going to build a better one right now--I'll just put a cheaper bike
    in there instead.
    The "current values per-backend" thing can go in shared memory. The
    reason the per-table ones can't is obviously that they go away when
    the backend disconnects.

    I was hoping to eventually take the useful summary bits at the end, the
    totals, and save those into statistics somewhere each time a VACUUM of
    either sort finishes.  It would fit with the information shown in
    pg_stat_tables, but that's obviously getting too wide.  Breaking out a
    pg_stat_autovacuum view that contains all the relevant bits currently shown
    in that view, plus these 3 new fields, would be a reasonable start.
    That depends on what you mean by too wide. If it's intended to be
    consumed with "SELECT *" or not...
  • Euler Taveira de Oliveira at Aug 18, 2011 at 5:56 pm

    Em 18-08-2011 12:54, Greg Smith escreveu:
    I was hoping to eventually take the useful summary bits at the end, the
    totals, and save those into statistics somewhere each time a VACUUM of
    either sort finishes. It would fit with the information shown in
    pg_stat_tables, but that's obviously getting too wide. Breaking out a
    pg_stat_autovacuum view that contains all the relevant bits currently
    shown in that view, plus these 3 new fields, would be a reasonable start.
    IMHO the useful summary bits belongs to log. If you want to add it to stats
    collector go for it. But if you go to the latter road, it is recommended to
    move some fields (time-related fields) from pg_stat_*_tables to this new view
    (pg_stat_maintenance?). I don't know how generic you want to go, but keep
    in mind that I would like to cover automatic and manual maintenance commands.

    Besides that, another view could cover the maintenance activity. This new
    view could contain at least datname, schemaname, relname, command_start,
    command_schedule, operation, progress (?), procpid, and current_command. The
    name has to be generic enough to cover all maintenance commands (perhaps
    pg_maintenance_activity).


    --
    Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/
    PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
  • Robert Haas at Aug 18, 2011 at 2:13 pm

    On Wed, Aug 17, 2011 at 9:23 PM, Greg Smith wrote:
    On 08/17/2011 07:42 PM, Euler Taveira de Oliveira wrote:

    I don't like exposing this information only in the process title. It would
    be difficult for client apps (for example, PGAdmin) to track this kind of
    information, and it is restricted to local access. I'm not objecting to
    displaying this information in the process title; I'm just saying that the
    information should be exposed in functions (say
    pg_stat_get_vacuum_[hit|miss|dirty]) too.
    I tend to build the simplest possible thing that is useful enough to work.
    The data is getting stored and shown now, where it wasn't before.  If it's
    possible to expose that in additional ways later too, great.  The big step
    up for this information is to go from "unobtainable" to "obtainable".  I'd
    prefer not to add a quest for "easily obtainable" to the requirements until
    that big jump is made, for fear it will cause nothing to get delivered.
    Perhaps a reasonable way to break up the patch would be:

    - Part 1: Gather the information and display it in the
    log_autovacuum_min_duration output.
    - Part 2: Add the ability to see the information incrementally (via
    some mechanism yet to be agreed upon).

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Greg Smith at Aug 18, 2011 at 3:41 pm

    On 08/18/2011 10:12 AM, Robert Haas wrote:
    Perhaps a reasonable way to break up the patch would be:
    - Part 1: Gather the information and display it in the
    log_autovacuum_min_duration output.
    - Part 2: Add the ability to see the information incrementally (via
    some mechanism yet to be agreed upon).
    My reaction to all the suggestions for redesign is just that: pull out
    the part that does the incremental updates altogether, improve the part
    that dumps the info into the logs, and resubmit without any incremental
    progress for now. This is much more valuable to me if the first commit
    that hits is something I can backport trivially. I'm seeing enough
    production servers running into this problem right now on earlier
    versions to be worried about that, and the log dump at the end would be
    a huge help even if that was all they got. I'm going to add directly
    computing the write MB/s figure from the dirty data written too, since
    that ends up being the thing that I keep deriving by hand anyway.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Robert Haas at Aug 18, 2011 at 4:48 pm

    On Thu, Aug 18, 2011 at 11:41 AM, Greg Smith wrote:
    On 08/18/2011 10:12 AM, Robert Haas wrote:
    Perhaps a reasonable way to break up the patch would be:
    - Part 1: Gather the information and display it in the
    log_autovacuum_min_duration output.
    - Part 2: Add the ability to see the information incrementally (via
    some mechanism yet to be agreed upon).
    My reaction to all the suggestions for redesign is just that: pull out the
    part that does the incremental updates altogether, improve the part that
    dumps the info into the logs, and resubmit without any incremental progress
    for now.  This is much more valuable to me if the first commit that hits is
    something I can backport trivially.  I'm seeing enough production servers
    running into this problem right now on earlier versions to be worried about
    that, and the log dump at the end would be a huge help even if that was all
    they got.  I'm going to add directly computing the write MB/s figure from
    the dirty data written too, since that ends up being the thing that I keep
    deriving by hand anyway.
    By the way, since I forgot to say it earlier, I think it's great that
    you are working on some of this instrumentation stuff, so +1 for the
    basic concept here.

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Jim Nasby at Aug 22, 2011 at 9:55 pm

    On Aug 18, 2011, at 10:41 AM, Greg Smith wrote:
    that was all they got. I'm going to add directly computing the write MB/s figure from the dirty data written too, since that ends up being the thing that I keep deriving by hand anyway.
    I know folks have talked about progress, but I haven't seen anything specific... could you add info about what table/index vacuum is working on, and how far along it is? I realize that's not very close to an actual % completion, but it's far better than what we have right now.

    FWIW, the number I end up caring about isn't so much write traffic as read. Thanks to a good amount of battery-backed write cache (and possibly some iSCSI misconfiguration), our writes are generally much cheaper than our reads.
    --
    Jim C. Nasby, Database Architect jim@nasby.net
    512.569.9461 (cell) http://jim.nasby.net
  • Greg Smith at Aug 23, 2011 at 3:48 am

    On 08/22/2011 05:54 PM, Jim Nasby wrote:
    I know folks have talked about progress, but I haven't seen anything
    specific... could you add info about what table/index vacuum is
    working on, and how far along it is? I realize that's not very close
    to an actual % completion, but it's far better than what we have right
    now.
    Due to complaints about the mechanism the first version used to inform
    the user of the progress, I'm yanking that from the next patch
    altogether. The goal for now is to get a good report into the logs, and
    then maybe that gets extended later with a progress report. (All of the
    proposed alternate mechanisms are way more complicated than anything I
    have time to do right now)
    FWIW, the number I end up caring about isn't so much write traffic as read. Thanks to a good amount of battery-backed write cache (and possibly some iSCSI misconfiguration), our writes are generally much cheaper than our reads.
    VACUUM can't really know its true read rate from what's inside the
    database. I can add a summary of the accumulated read amounts into the
    logs, in more useful figures than what is provided so far, which is
    better than nothing. But those will be kind of deceptive, which is one
    reason I wasn't so focused on them yet. If the relation is largely in
    the OS cache, but not the PostgreSQL one, the summary can show a read
    rate even when that isn't actually doing any reads at all. That was
    exactly the case in the sample data I posted. VACUUM thought it was
    reading anywhere from 2.5 to 6MB/s. But at the OS level, it was
    actually reading zero bytes, since the whole thing was in cache already.

    What you actually want are the accumulated read counters at
    the OS level. I've recently figured out how to track those, too, but
    that code is something that lives outside the database. If this is
    something useful to you, I think you're about to sign up to be my next
    beta tester for that program.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Greg Smith at Aug 28, 2011 at 8:35 am
    Attached patch includes "math is hard" reworking, so it displays the
    average write rate in the log output automatically:

    LOG: automatic vacuum of table "pgbench.public.pgbench_accounts": index
    scans: 1
    pages: 0 removed, 163935 remain
    tuples: 2000000 removed, 4625165 remain
    buffer usage: 111901 hits, 123348 misses, 102351 dirtied, 23.365
    MiB/s write rate
    system usage: CPU 1.84s/4.22u sec elapsed 34.22 sec
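
    That write rate can be re-derived by hand from the dirtied count in
    the same log entry, assuming the default 8 kB PostgreSQL block size;
    this is a back-of-the-envelope sketch, not the patch's actual code:

```python
# Sanity check: reconstruct the logged write rate from the counters,
# assuming the default PostgreSQL block size of 8192 bytes (BLCKSZ).
BLCKSZ = 8192
MIB = 1024 * 1024

dirtied = 102351   # "dirtied" count from the log line above
elapsed = 34.22    # elapsed seconds from the log line above

write_rate = dirtied * BLCKSZ / MIB / elapsed
print(f"{write_rate:.3f} MiB/s")  # prints 23.367 MiB/s, matching the logged 23.365 within rounding
```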

    All of the updates to the process title are gone, in favor of some
    progress report mechanism TBD. The summary is much more important than
    the progress tracking part as far as I'm concerned, I don't mind
    splitting things apart to try and get this part in earlier.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Greg Smith at Aug 28, 2011 at 9:55 am
    Updated patch cleans up two diff mistakes made when backing out the
    progress report feature. The tip-off I screwed up should have been the
    absurdly high write rate shown. The usleep was accidentally deleted, so
    it was running without cost limits even applying. Here's a good one
    instead:

    LOG: automatic vacuum of table "pgbench.public.pgbench_accounts": index
    scans: 1
    pages: 0 removed, 163935 remain
    tuples: 2000000 removed, 2928356 remain
    buffer usage: 117393 hits, 123351 misses, 102684 dirtied, 2.168
    MiB/s write rate
    system usage: CPU 2.54s/6.27u sec elapsed 369.99 sec

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Robert Haas at Aug 29, 2011 at 3:03 pm

    On Sun, Aug 28, 2011 at 5:54 AM, Greg Smith wrote:
    Updated patch cleans up two diff mistakes made when backing out the progress
    report feature.  The tip-off I screwed up should have been the absurdly high
    write rate shown.  The usleep was accidentally deleted, so it was running
    without cost limits even applying.  Here's a good one instead:

    LOG:  automatic vacuum of table "pgbench.public.pgbench_accounts": index
    scans: 1
    pages: 0 removed, 163935 remain
    tuples: 2000000 removed, 2928356 remain
    buffer usage: 117393 hits, 123351 misses, 102684 dirtied, 2.168 MiB/s
    write rate
    system usage: CPU 2.54s/6.27u sec elapsed 369.99 sec
    I postulate that this change isn't needed until you get to the part
    where we do some kind of intermediate progress reporting:

    ! if (Log_autovacuum_min_duration > 0 || VacuumCostActive)

    Instead of doing this only when vacuum costing is active, could we
    drive it off of the pgBufferUsage stuff (maybe with a few tweaks...)
    and do it unconditionally?

    To me it seems like it would be better to say "write rate xyz MB/s"
    rather than "xyz MB/s write rate", but maybe I'm in the minority on
    that one.

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Greg Smith at Aug 29, 2011 at 7:24 pm

    On 08/29/2011 11:03 AM, Robert Haas wrote:
    Instead of doing this only when vacuum costing is active, could we
    drive it off of the pgBufferUsage stuff (maybe with a few tweaks...)
    and do it unconditionally?
    Sure. I've wondered about an even larger refactoring, to reorient
    vacuum costing completely: drive it all from the pgBufferUsage
    side rather than running its own totals. I didn't even start wandering
    down that path yet because of time constraints, plus the desire to have
    something I could backport to installs having VACUUM issues on earlier
    versions. This code I'd backport without hesitation; something that
    wanders toward a more complicated rearrangement becomes harder to deal with.
    To me it seems like it would be better to say "write rate xyz MB/s"
    rather than "xyz MB/s write rate", but maybe I'm in the minority on
    that one.
    I was just trying to mimic the style of the logging already there as
    closely as I could. I don't like the way the existing log message looks
    either. I wasn't going to ignore its style over that though.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Shigeru Hanada at Sep 26, 2011 at 1:02 pm
    Hi Greg,

    (2011/08/28 18:54), Greg Smith wrote:
    Updated patch cleans up two diff mistakes made when backing out the
    progress report feature. The tip-off I screwed up should have been the
    absurdly high write rate shown. The usleep was accidentally deleted, so
    it was running without cost limits even applying. Here's a good one
    instead:

    LOG: automatic vacuum of table "pgbench.public.pgbench_accounts": index
    scans: 1
    pages: 0 removed, 163935 remain
    tuples: 2000000 removed, 2928356 remain
    buffer usage: 117393 hits, 123351 misses, 102684 dirtied, 2.168 MiB/s
    write rate
    system usage: CPU 2.54s/6.27u sec elapsed 369.99 sec
    I took a look at your patch, and the fundamental functionality seems
    fine, though the patch needed to be rebased against current
    HEAD. Please see the attached patch, which I used for review.

    IIUC, this patch provides:
    * Three counters which track the number of buffers that were (1)
    Hits: found in shared buffers, (2) Misses: not found in shared buffers,
    and (3) Dirtied: marked as dirty, during an autovacuum of a relation.
    These counters are used only when cost-based autovacuum is enabled by
    setting vacuum_cost_delay to non-zero.
    * The capability to report the buffer counts above, and the buffer write
    rate (MiB/sec), in the existing autovacuum log message, only when the
    actual duration > log_autovacuum_min_duration and cost-based autovacuum
    is enabled.

    I think one concern is how the statistics are shown. If showing a summary
    of statistics (at the end of an autovacuum) in the server log is enough,
    the current implementation is fine. Also, showing a progress report in the
    server log would be easy to achieve. In contrast, reporting progress via
    another backend would require shared memory or the statistics collector,
    rather than per-process global variables. Anyway, this patch can be the
    base of such an enhancement.

    There are some trivial comments:
    * Local variables added by the patch (secs, usecs, write_rate and
    endtime) can be moved into narrower scope.
    * Initializing starttime to zero seems unnecessary.

    In addition, documentation about how to use the statistics would be
    necessary, maybe in "23.1.5. The Autovacuum Daemon".

    I'll set the status of this patch to waiting-on-author. Would you rebase
    the patch and post it again?

    Regards,
    --
    Shigeru Hanada
  • Alvaro Herrera at Sep 29, 2011 at 5:40 pm
    I reviewed this patch. My question for you is: does it make sense to
    enable the reporting of the write rate even when vacuum cost accounting is
    enabled? In my opinion it would be useful to do so. If you agree,
    please submit an updated patch.

    --
    Álvaro Herrera <alvherre@commandprompt.com>
    The PostgreSQL Company - Command Prompt, Inc.
    PostgreSQL Replication, Consulting, Custom Development, 24x7 support
  • Greg Smith at Oct 5, 2011 at 7:02 am

    On 09/29/2011 10:40 AM, Alvaro Herrera wrote:
    I reviewed this patch. My question for you is: does it make sense to
    enable the reporting of the write rate even when vacuum cost accounting is
    enabled? In my opinion it would be useful to do so. If you agree,
    please submit an updated patch.
    Presumably you meant to ask if this makes sense to show when cost
    accounting isn't enabled, because the code doesn't do that right now.
    No cost accounting, no buffer usage/write rate data as this was submitted.

    Looks like making this work even in cases where cost accounting isn't on
    will make the patch a bit more obtrusive, but it's not unreasonable.
    Now that you mention it, people who do a manual, full-speed VACUUM would
    certainly appreciate some feedback on the rate it ran at. I'll include
    that in the next update.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Alvaro Herrera at Oct 5, 2011 at 11:44 am

    Excerpts from Greg Smith's message of mié oct 05 04:02:12 -0300 2011:
    On 09/29/2011 10:40 AM, Alvaro Herrera wrote:
    I reviewed this patch. My question for you is: does it make sense to
    enable the reporting of the write rate even when vacuum cost accounting is
    enabled? In my opinion it would be useful to do so. If you agree,
    please submit an updated patch.
    Presumably you meant to ask if this makes sense to show when cost
    accounting isn't enabled, because the code doesn't do that right now.
    No cost accounting, no buffer usage/write rate data as this was submitted.
    Yes, sorry, that's what I meant.
    Looks like making this work even in cases where cost accounting isn't on
    will make the patch a bit more obtrusive, but it's not unreasonable.
    Now that you mention it, people who do a manual, full-speed VACUUM would
    certainly appreciate some feedback on the rate it ran at. I'll include
    that in the next update.
    Thanks.

    --
    Álvaro Herrera <alvherre@commandprompt.com>
    The PostgreSQL Company - Command Prompt, Inc.
    PostgreSQL Replication, Consulting, Custom Development, 24x7 support
  • Greg Smith at Nov 16, 2011 at 7:52 am

    On 10/05/2011 03:02 AM, Greg Smith wrote:
    Presumably you meant to ask if this makes sense to show when cost
    accounting isn't enabled, because the code doesn't do that right now.
    No cost accounting, no buffer usage/write rate data as this was
    submitted.
    This is done in the attached update. I just made the page accounting
    happen all the time, regardless of whether the costs were being
    accumulated. Added a read rate too, which is how fast reads happened
    from the OS cache to shared_buffers. Simple test case generates a 600MB
    pgbench_accounts database and wipes out enough to take a while to clean
    up; it needs log_autovacuum_min_duration = 0 and then:

    $ createdb pgbench
    $ pgbench -i -s 10 pgbench
    $ psql -d pgbench -c "delete from pgbench_accounts where aid<200000"

    LOG: automatic vacuum of table "pgbench.public.pgbench_accounts": index
    scans: 1
    pages: 0 removed, 16394 remain
    tuples: 199999 removed, 640011 remain
    buffer usage: 13742 hits, 2708 misses, 1058 dirtied
    avg read rate: 3.067 MiB/s, avg write rate: 1.198 MiB/s
    system usage: CPU 0.05s/0.61u sec elapsed 6.89 sec
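
    Both averages can be cross-checked from the counters the same way,
    again assuming the default 8 kB block size. Note the read rate is
    derived from misses (reads into shared_buffers), which, per the
    earlier discussion, may be served from the OS cache rather than disk:

```python
# Re-derive both averaged rates from the logged counters, assuming the
# default 8192-byte block size. "misses" are reads into shared_buffers
# and may be satisfied by the OS cache, not physical disk reads.
BLCKSZ = 8192
MIB = 1024 * 1024

misses, dirtied, elapsed = 2708, 1058, 6.89   # from the log entry above

read_rate = misses * BLCKSZ / MIB / elapsed
write_rate = dirtied * BLCKSZ / MIB / elapsed
print(f"avg read rate: {read_rate:.3f} MiB/s, "
      f"avg write rate: {write_rate:.3f} MiB/s")
# within rounding of the logged 3.067 and 1.198 figures
```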
    Now that you mention it, people who do a manual, full-speed VACUUM
    would certainly appreciate some feedback on the rate it ran at.
    This is more of a pain because this whole code path is only active when
    IsAutoVacuumWorkerProcess. I have some larger refactoring in mind to
    perhaps make that more feasible. I didn't want to hold up this update,
    aimed at the more valuable autovac case, for that though; it can always
    be layered on later.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
  • Alvaro Herrera at Nov 25, 2011 at 4:39 pm

    Excerpts from Greg Smith's message of mié nov 16 04:37:43 -0300 2011:
    On 10/05/2011 03:02 AM, Greg Smith wrote:
    Presumably you meant to ask if this makes sense to show when cost
    accounting isn't enabled, because the code doesn't do that right now.
    No cost accounting, no buffer usage/write rate data as this was
    submitted.
    This is done in the attached update. I just made the page accounting
    happen all the time, regardless of whether the costs were being
    accumulated. Added a read rate too, which is how fast reads happened
    from the OS cache to shared_buffers. Simple test case generates a 600MB
    pgbench_accounts database and wipes out enough to take a while to clean
    up; it needs log_autovacuum_min_duration = 0 and then:

    $ createdb pgbench
    $ pgbench -i -s 10 pgbench
    $ psql -d pgbench -c "delete from pgbench_accounts where aid<200000"

    LOG: automatic vacuum of table "pgbench.public.pgbench_accounts": index
    scans: 1
    pages: 0 removed, 16394 remain
    tuples: 199999 removed, 640011 remain
    buffer usage: 13742 hits, 2708 misses, 1058 dirtied
    avg read rate: 3.067 MiB/s, avg write rate: 1.198 MiB/s
    system usage: CPU 0.05s/0.61u sec elapsed 6.89 sec
    I was about to commit this when I noticed that the avg values may not be
    all that representative of reality after all; consider that it's
    computed across the whole duration of the vacuuming operation, including
    the index scans ... it'd be possibly useful to keep separate timings for
    the heap scan (which is likely to use I/O more quickly) from index
    scans. That way you can tune for the max, not a possibly misleading
    average. That's a much larger change though, so I'm not going to get
    into it.

    Does anybody else think this would be worthwhile? If so we can stick it
    into the TODO with an "easy" tag for someone to tackle -- seems like a
    useful first project.


    One funny thing in the test I did was that the buffer counts might add up
    to a much larger amount than the total disk pages:

    LOG: automatic vacuum of table "alvherre.public.foo": index scans: 4
    pages: 0 removed, 8850 remain
    tuples: 666668 removed, 1333332 remain
    buffer usage: 14675 hits, 33857 misses, 20274 dirtied
    avg read rate: 2.823 MiB/s, avg write rate: 1.690 MiB/s
    system usage: CPU 1.26s/8.08u sec elapsed 93.69 sec

    The table and index:

    alvherre=# select relname, relpages from pg_class where relname like 'foo%';
    relname | relpages
    ---------+----------
    foo | 8850
    foo_idx | 5487
    (2 filas)

    My guess is that this is roughly counting three heap scans plus the four
    index scans mentioned in the log report (there were so many because I
    reduced maintenance_work_mem to its 1 MB minimum):

    alvherre=# select 5487 * 4 + 8850 * 3;
    ?column?
    ----------
    48498
    (1 fila)

    alvherre=# select 14675 + 33857;
    ?column?
    ----------
    48532
    (1 fila)
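
    That back-of-the-envelope accounting is easy to repeat outside psql;
    with maintenance_work_mem at its 1 MB minimum forcing four index
    scans, hits plus misses should land near three heap passes plus four
    index passes:

```python
# Check the guess that hits + misses ~= 3 heap passes + 4 index scans,
# using the relpages figures from pg_class above.
heap_pages = 8850     # relpages of foo
index_pages = 5487    # relpages of foo_idx

expected = 3 * heap_pages + 4 * index_pages
observed = 14675 + 33857   # hits + misses from the autovacuum log

print(expected, observed)  # prints 48498 48532, a difference of only 34 pages
```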



    My test case was

    create table foo (a int);
    insert into foo select * from generate_series (1, 2000000);
    create index foo_idx on foo (a);
    delete from foo where a % 6 in (1,2);

    I then checkpointed before autovac had the chance to process the table,
    just to see a higher number of pages dirtied. Note how the number of
    pages dirtied is also much higher than the total number of existing pages!

    I'm going to push this now anyway, thanks.

    --
    Álvaro Herrera <alvherre@commandprompt.com>
    The PostgreSQL Company - Command Prompt, Inc.
    PostgreSQL Replication, Consulting, Custom Development, 24x7 support
  • Robert Haas at Nov 26, 2011 at 1:11 am

    On Fri, Nov 25, 2011 at 11:39 AM, Alvaro Herrera wrote:
    I'm going to push this now anyway, thanks.
    This patch adds a count of the number of buffers dirtied to VACUUM,
    but it strikes me that it would be useful to add similar tracking to
    pgBufferUsage. Attached is a patch for that. You can see the new
    counters through pg_stat_statements or with EXPLAIN (ANALYZE,
    BUFFERS). This is useful because the number of buffers that a query
    *writes* doesn't necessarily have much to do with anything - it may
    end up writing buffers dirtied by other queries while being read-only
    itself, or conversely it may not write anything at all even though it
    dirties quite a bit.
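
    The dirtied-versus-written distinction can be illustrated with a toy
    buffer pool; this is a deliberately simplified sketch, not
    PostgreSQL's actual buffer manager, in which a query only writes a
    dirty page out when it must evict it to read something else:

```python
# Toy buffer pool, NOT PostgreSQL internals: per-query "dirtied" and
# "written" counters, where a page is only written out when some query
# must evict it to make room for another page.
class ToyBufferPool:
    def __init__(self, size):
        self.size = size
        self.pages = {}   # page id -> dirty flag (insertion-ordered)
        self.stats = {}   # query name -> {"dirtied": n, "written": n}

    def access(self, query, page, modify=False):
        st = self.stats.setdefault(query, {"dirtied": 0, "written": 0})
        if page not in self.pages and len(self.pages) >= self.size:
            victim, dirty = next(iter(self.pages.items()))  # evict oldest
            if dirty:
                st["written"] += 1  # this query writes another's dirty page
            del self.pages[victim]
        was_dirty = self.pages.get(page, False)
        self.pages[page] = was_dirty or modify
        if modify and not was_dirty:
            st["dirtied"] += 1

pool = ToyBufferPool(size=2)
pool.access("update", 1, modify=True)  # dirties page 1, never writes it
pool.access("select", 2)               # read-only query
pool.access("select", 3)               # must evict dirty page 1: a write
print(pool.stats)
# {'update': {'dirtied': 1, 'written': 0},
#  'select': {'dirtied': 0, 'written': 1}}
```

    The read-only query ends up charged with the write, while the query
    that did the dirtying shows none, which is exactly why tracking
    dirtied separately is informative.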

    Thoughts? Comments? Objections?

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Greg Smith at Nov 28, 2011 at 9:09 am

    On 11/25/2011 05:10 PM, Robert Haas wrote:
    This patch adds a count of the number of buffers dirtied to VACUUM,
    but it strikes me that it would be useful to add similar tracking to
    pgBufferUsage. Attached is a patch for that. You can see the new
    counters through pg_stat_statements or with EXPLAIN (ANALYZE,
    BUFFERS).
    This looks pretty useful to me. I just threw it into the current
    CommitFest, on the basis that there are already so many other things
    trying to whack around pg_stat_statements right now that we might as
    well keep them together. Let me clear my queue of patches submitted on
    time that I need to do something with (re-review Scott Mead's
    pg_stat_activity change, respond to Jeff Janes on relation free space)
    and then I'll take a quick spin on this one.
  • Fujii Masao at Feb 17, 2012 at 10:04 am

    On Sat, Nov 26, 2011 at 10:10 AM, Robert Haas wrote:
    On Fri, Nov 25, 2011 at 11:39 AM, Alvaro Herrera
    wrote:
    I'm going to push this now anyway, thanks.
    This patch adds a count of the number of buffers dirtied to VACUUM,
    but it strikes me that it would be useful to add similar tracking to
    pgBufferUsage.  Attached is a patch for that.  You can see the new
    counters through pg_stat_statements or with EXPLAIN (ANALYZE,
    BUFFERS).  This is useful because the number of buffers that a query
    *writes* doesn't necessarily have much to do with anything - it may
    end up writing buffers dirtied by other queries while being read-only
    itself, or conversely it may not write anything at all even though it
    dirties quite a bit.

    Thoughts?  Comments?  Objections?
    Here are review comments:

    The document about EXPLAIN needs to be updated.

    You forgot to add the long-integer-valued property of
    shared/local_blks_dirtied, so when I ran EXPLAIN with json as the
    format, no information about blks_dirtied was reported.

    Regards,

    --
    Fujii Masao
    NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    NTT Open Source Software Center
  • Robert Haas at Feb 17, 2012 at 5:16 pm

    On Fri, Feb 17, 2012 at 5:04 AM, Fujii Masao wrote:
    Here are review comments:

    The document about EXPLAIN needs to be updated.

    You forgot to add the long-integer-valued property of
    shared/local_blks_dirtied, so when I ran EXPLAIN with json as the
    format, no information about blks_dirtied was reported.
    Thanks for the review. Updated patch attached.

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Fujii Masao at Feb 20, 2012 at 7:49 am

    On Sat, Feb 18, 2012 at 2:16 AM, Robert Haas wrote:
    On Fri, Feb 17, 2012 at 5:04 AM, Fujii Masao wrote:
    Here are review comments:

    The document about EXPLAIN needs to be updated.

    You forgot to add the long-integer-valued property of
    shared/local_blks_dirtied, so when I ran EXPLAIN with json as the
    format, no information about blks_dirtied was reported.
    Thanks for the review.  Updated patch attached.
    Thanks for updating the patch!

    The patch looks good to me. But I have three minor comments:


    In pg_stat_statements--1.1.sql
    +/* contrib/pg_stat_statements/pg_stat_statements--1.0.sql */

    Typo: s/1.0/1.1


    In pg_stat_statements--1.0--1.1.sql, should we complain if the script
    is sourced in psql, as follows?

    \echo Use "ALTER EXTENSION pg_stat_statements UPDATE TO '1.1'" to load this file. \quit


    +DATA = pg_stat_statements--1.1.sql pg_stat_statements--1.0--1.1.sql \
    + pg_stat_statements--unpackaged--1.0.sql

    Though I'm not that familiar with CREATE EXTENSION: why did you exclude
    1.0.sql from DATA? In hstore/Makefile, 1.0.sql is included. Do you think
    we should prevent the old version (i.e., 1.0) of pg_stat_statements from
    being used in 9.2?

    Regards,

    --
    Fujii Masao
    NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    NTT Open Source Software Center
  • Robert Haas at Feb 20, 2012 at 2:01 pm

    On Mon, Feb 20, 2012 at 2:49 AM, Fujii Masao wrote:
    In pg_stat_statements--1.0--1.1.sql, should we complain if the script
    is sourced in psql, as follows?

    \echo Use "ALTER EXTENSION pg_stat_statements UPDATE TO '1.1'" to load this file. \quit
    Yeah, maybe. I don't know if we want to put that in every file
    forever, but I guess it probably makes sense to do it here.
    +DATA = pg_stat_statements--1.1.sql pg_stat_statements--1.0--1.1.sql \
    +       pg_stat_statements--unpackaged--1.0.sql

    Though I'm not that familiar with CREATE EXTENSION: why did you exclude
    1.0.sql from DATA? In hstore/Makefile, 1.0.sql is included. Do you think
    we should prevent the old version (i.e., 1.0) of pg_stat_statements from
    being used in 9.2?
    I'm not sure. My feeling is that we probably don't want to ship all
    the old scripts forever. People should install the latest version,
    and use the upgrade scripts to get there if they have an older one.
    So my gut feeling here is to change hstore to exclude that file rather
    than adding it here. Any other opinions?

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Fujii Masao at Feb 21, 2012 at 3:59 am

    On Mon, Feb 20, 2012 at 11:00 PM, Robert Haas wrote:
    On Mon, Feb 20, 2012 at 2:49 AM, Fujii Masao wrote:
    +DATA = pg_stat_statements--1.1.sql pg_stat_statements--1.0--1.1.sql \
    +       pg_stat_statements--unpackaged--1.0.sql

    Though I'm not that familiar with CREATE EXTENSION: why did you exclude
    1.0.sql from DATA? In hstore/Makefile, 1.0.sql is included. Do you think
    we should prevent the old version (i.e., 1.0) of pg_stat_statements from
    being used in 9.2?
    I'm not sure.  My feeling is that we probably don't want to ship all
    the old scripts forever.  People should install the latest version,
    and use the upgrade scripts to get there if they have an older one.
    So my gut feeling here is to change hstore to exclude that file rather
    than adding it here.  Any other opinions?
    Agreed. But I wonder why the VERSION option is usable in CREATE EXTENSION
    if people should always use the latest version. Maybe I'm missing something...
    Anyway, shipping v1.0 of pg_stat_statements seems less useful in 9.2, so
    I agree to exclude it.

    Regards,

    --
    Fujii Masao
    NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    NTT Open Source Software Center
  • Dimitri Fontaine at Feb 21, 2012 at 9:44 pm

    Fujii Masao writes:
    Though I'm not that familiar with CREATE EXTENSION: why did you exclude
    1.0.sql from DATA? In hstore/Makefile, 1.0.sql is included. Do you think
    we should prevent the old version (i.e., 1.0) of pg_stat_statements from
    being used in 9.2?
    I'm not sure.  My feeling is that we probably don't want to ship all
    the old scripts forever.  People should install the latest version,
    and use the upgrade scripts to get there if they have an older one.
    So my gut feeling here is to change hstore to exclude that file rather
    than adding it here.  Any other opinions?
    The problem with the hstore scripts is that you had to copy the 1.0
    script, change a couple of lines, and call that 1.1, and you also had to
    provide the 1.0--1.1 script file.

    The solution would be to be able to create hstore 1.1 from 1.0
    automatically and I sent over a very simple patch to do that, albeit
    after the deadline for the current CF (that's why it's not listed).

    Maybe that's simple enough to be considered? (re-attaching here)

    b/contrib/hstore/Makefile | 2
    b/contrib/hstore/hstore.control | 1
    b/src/backend/commands/extension.c | 83 +++--
    contrib/hstore/hstore--1.1.sql | 524 -------------------------------------
    4 files changed, 57 insertions(+), 553 deletions(-)
    Agreed. But I wonder why the VERSION option is usable in CREATE EXTENSION
    if people should always use the latest version. Maybe I'm missing something...
    I think not that many people are using 9.1 in production yet. Also
    bear in mind that the mechanism is not made only for contrib; it makes
    sense to ship in-house procedure code as an extension too.

    Regards,
    --
    Dimitri Fontaine
    http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
  • Greg Smith at Feb 22, 2012 at 4:28 am

    On 02/21/2012 04:44 PM, Dimitri Fontaine wrote:
    The solution would be to be able to create hstore 1.1 from 1.0
    automatically and I sent over a very simple patch to do that, albeit
    after the deadline for the current CF (that's why it's not listed).

    Maybe that's simple enough to be considered? (re-attaching here)
    I can't find any message that matches this description--from you,
    touches extensions in this way, and was submitted after the CF
    deadline. Help? If that's something that ripples out to impacting how
    changes to upgraded extensions should look, that should get its own
    thread I think. This one is pretty deep and already far off its titled
    topic.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
  • Dimitri Fontaine at Feb 22, 2012 at 10:07 am

    Greg Smith writes:
    On 02/21/2012 04:44 PM, Dimitri Fontaine wrote:
    The solution would be to be able to create hstore 1.1 from 1.0
    automatically and I sent over a very simple patch to do that, albeit
    after the deadline for the current CF (that's why it's not listed).

    Maybe that's simple enough to be considered? (re-attaching here)
    I can't find any message that matches this description--from you, touches
    extensions in this way, and was submitted after the CF deadline. Help? If
    http://archives.postgresql.org/message-id/m2k44m9oyo.fsf@2ndQuadrant.fr
    that's something that ripples out to impacting how changes to upgraded
    extensions should look, that should get its own thread I think. This one is
    pretty deep and already far off its titled topic.
    Let's do that then. Well I'm just changing the subject here.
    --
    Dimitri Fontaine
    http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
  • Greg Smith at Feb 22, 2012 at 4:55 am
    I just took this for a spin. Everything I tried worked, docs built and
    read fine. The description of how "dirty" differs from "written" is a
    bit cryptic, but I don't see an easy way to do better without a whole
    new section on that topic. Once the extension upgrade questions are
    sorted out, I'd say this is ready to commit. Example I have at the
    bottom here shows a case where this is a big improvement over the
    existing tracking. I think this is a must-have improvement if we're
    going to advocate using pg_stat_statements for more things.

    This works as expected in all of the EXPLAIN forms, I tried all of the
    supported formats. Sample of the text one:

    $ psql -d pgbench -c "EXPLAIN (ANALYZE,BUFFERS,FORMAT text) UPDATE
    pgbench_accounts SET aid=aid+0 WHERE aid<1000"
    QUERY PLAN
    ----------
    Update on pgbench_accounts (cost=0.00..86.09 rows=860 width=103)
    (actual time=8.587..8.587 rows=0 loops=1)
    Buffers: shared hit=8315 read=70 dirtied=16
    -> Index Scan using pgbench_accounts_pkey on pgbench_accounts
    (cost=0.00..86.09 rows=860 width=103) (actual time=0.017..2.086 rows=999
    loops=1)
    Index Cond: (aid < 1000)
    Buffers: shared hit=1828 read=28
    Total runtime: 8.654 ms

    Also ran just the UPDATE statement alone, then retrieved the counts from
    pg_stat_statements:

    $ psql -x -c "select * from pg_stat_statements"
    -[ RECORD 1
    ]-------+-------------------------------------------------------------------------------------------
    userid | 10
    dbid | 16385
    query | UPDATE pgbench_accounts SET aid=aid+0 WHERE aid<1000
    calls | 1
    total_time | 0.007475
    rows | 999
    shared_blks_hit | 8370
    shared_blks_read | 15
    shared_blks_dirtied | 15
    shared_blks_written | 0
    ...

    Note that there are no blocks shown as written there. That is also
    demonstrated by the results after some pgbench "-M prepared" stress
    testing against a small database. The pgbench tables are structured
    such that the number of branches < tellers << accounts. On a small
    scale database (I used 10 here), there might only be a single page of
    branch data. That shows up clearly in the different amount of dirtied
    blocks in each update:

    $ psql -x -c "select
    query,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written
    from pg_stat_statements order by calls desc limit 7"

    ...
    query | UPDATE pgbench_branches SET bbalance = bbalance +
    $1 WHERE bid = $2;
    shared_blks_hit | 32929
    shared_blks_read | 0
    shared_blks_dirtied | 1
    shared_blks_written | 0

    query | UPDATE pgbench_tellers SET tbalance = tbalance +
    $1 WHERE tid = $2;
    shared_blks_hit | 19074
    shared_blks_read | 0
    shared_blks_dirtied | 7
    shared_blks_written | 0

    query | UPDATE pgbench_accounts SET abalance = abalance +
    $1 WHERE aid = $2;
    shared_blks_hit | 35563
    shared_blks_read | 9982
    shared_blks_dirtied | 4945
    shared_blks_written | 2812

    Note how in the branches and tellers case, the existing "written"
    counter shows 0. Those hot pages stay in cache the whole time with a
    high usage count, backends never get to write them out; only the
    checkpointer does. Only this new "dirtied" one reflects a useful write
    count for frequently used pages like that, and it does show that more
    pages are being touched by pgbench_tellers than pgbench_branches.

    I'd never run into this before because I normally test against larger
    databases. But once I tried to find an example of this form, it was
    easy to do so. Systems where much of the database fits into
    shared_buffers in particular are likely to see a deceptively small write
    count.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
  • Robert Haas at Feb 23, 2012 at 1:36 am

    On Tue, Feb 21, 2012 at 11:55 PM, Greg Smith wrote:
    I just took this for a spin.  Everything I tried worked; docs built and
    read fine.  The description of how "dirty" differs from "written" is a bit
    cryptic, but I don't see an easy way to do better without a whole new
    section on that topic.  Once the extension upgrade questions are sorted out,
    I'd say this is ready to commit.  Example I have at the bottom here shows a
    case where this is a big improvement over the existing tracking.  I think
    this is a must-have improvement if we're going to advocate using
    pg_stat_statements for more things.
    Glad you like it; committed.

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Greg Smith at Nov 28, 2011 at 8:53 am

    On 11/25/2011 08:39 AM, Alvaro Herrera wrote:
    I was about to commit this when I noticed that the avg values may not be
    all that representative of reality after all; consider that it's
    computed across the whole duration of the vacuuming operation, including
    the index scans ... it'd be possibly useful to keep separate timings for
    the heap scan (which is likely to use I/O more quickly) from index
    scans. That way you can tune for the max, not a possibly misleading
    average. That's a much larger change though, so I'm not going to get
    into it.
    Yes, that's one of the interesting additional things to track. At the
    point I was writing this originally, ideas like "Single Pass VACUUM"
    were still on the table; maybe they still are. It didn't seem
    worthwhile to try to get any more detailed than the average when the
    underlying work might still be whacked around. Just having some useful
    numbers on both costs and speeds for the first time was such an
    improvement that I didn't want to get too far lost chasing perfection
    here, at the risk of not getting anything done.
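
    Alvaro's point about the average can be made concrete with some toy
    arithmetic: one rate computed across the whole vacuum blends phases
    with very different I/O behavior, so a limit tuned to that average can
    throttle well below the heap scan's actual burst rate. The figures
    below are invented purely for illustration, not measured output:

    ```python
    # Illustrative only: why a whole-vacuum average rate can mislead.

    def rate_mb_per_s(mb, seconds):
        return mb / seconds

    heap_mb, heap_s = 6000, 120     # heap scan: fast sequential reads
    index_mb, index_s = 1500, 300   # index scans: slower, random-ish I/O

    heap = rate_mb_per_s(heap_mb, heap_s)                        # 50 MB/s
    overall = rate_mb_per_s(heap_mb + index_mb, heap_s + index_s)

    # overall is about 17.9 MB/s against a heap-scan peak of 50 MB/s;
    # tuning cost limits for the average would undershoot the peak by
    # nearly 3x, which is why per-phase timings would help.
    ```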

    One funny thing in the test I did was that the buffer count might add to
    a much larger amount than total disk pages:
    Good catch, I didn't think about the ramifications of multi-pass work
    here. I'll revisit that and see if I can do a bit better before the
    final 9.2 CommitFest; I'm not done with this area yet. I kind of needed
    this basic logging patch in place before it's easy to experiment with
    behavior changes and see what they do though.
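
    The "buffer count larger than total disk pages" effect has a simple
    arithmetic explanation: when the dead-TID list overflows
    maintenance_work_mem, VACUUM makes multiple passes, and the indexes
    are rescanned on each pass, so their pages get counted again every
    time. A toy accounting sketch, with an invented index size and pass
    count (only relpages comes from the example earlier in the thread):

    ```python
    # Toy model of multi-pass vacuum buffer accounting; numbers other
    # than relpages are illustrative, not PostgreSQL output.

    def vacuum_buffer_touches(heap_pages, index_pages, passes):
        # Simplified: heap pages counted once, index pages once per pass.
        return heap_pages + index_pages * passes

    relpages = 819673                 # pgbench_accounts heap size
    touches = vacuum_buffer_touches(heap_pages=819673,
                                    index_pages=250000, passes=3)
    # 819673 + 750000 index touches exceeds the heap's relpages alone,
    # so summed hit/miss counts can legitimately be larger than the
    # table's size on disk.
    ```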
  • Greg Smith at Oct 5, 2011 at 6:52 am

    On 09/26/2011 05:58 AM, Shigeru Hanada wrote:
    * Local variables added by the patch (secs, usecs, write_rate and
    endtime) can be moved into narrower scope.
    * Initializing starttime to zero seems unnecessary.
    Setting starttime to 0 is already in the code; the change made to that
    line was to add endtime, which is not initialized. You may be right that
    initializing it isn't necessary, but I'm sure not going to touch that
    part of the working code.

    You're right about the local variables; they were placed to look
    like the surrounding code rather than to be as local as possible. I'm
    not sure if all the PostgreSQL code is completely consistent here; a
    quick survey shows examples of both "put all the variables at the top"
    and "make variables as local to blocks as possible" styles. I don't know
    that it really makes any difference with modern compilers, either. I'm
    sure someone else will have a stronger opinion on this subject now that
    I've trolled the list for one by writing this.
    In addition, documents about how to use the statistics would be
    necessary, maybe in "23.1.5. The Autovacuum Daemon".
    I'll set the status of this patch to waiting-on-author. Would you rebase
    the patch and post it again?
    I didn't do that because there's not really much documentation at this
    level of detail yet--how to interpret all the information in the logs.
    That's an open-ended bit of work; there's a lot more that could be
    written on this topic than the docs have right now. It's probably worth
    pointing out that this specific info is now in the logs, though, given
    that people might not notice it otherwise. I'll see what I can do about
    that.

    As a general FYI, rebasing is normally requested only when the existing
    patch doesn't apply anymore. If "patch" or "git apply" can consume it,
    complaints about code shifting around aren't by themselves enough
    reason to ask for an updated patch.

    --
    Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
    PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

Discussion Overview
group: pgsql-hackers
categories: postgresql
posted: Aug 17, '11 at 9:04p
active: Feb 23, '12 at 1:36a
posts: 44
users: 11
website: postgresql.org...
irc: #postgresql
