(See below for the reply)
On 10/05/13 22:48, Vitalii Tymchyshyn wrote:
Well, could you write a trigger that would do what you need? AFAIR
analyze data is stored no matter transaction boundaries. You could store
some counters in session vars and issue an explicit analyze when enough
rows were added.
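A rough sketch of that suggestion (all names hypothetical; assumes a custom GUC for the session counter, and a modern release where `current_setting()` accepts a missing-ok flag) might look like:

```sql
-- Hypothetical sketch: count rows inserted in this session and issue an
-- explicit ANALYZE once enough have accumulated. Table/function names
-- are illustrative only.
CREATE OR REPLACE FUNCTION maybe_analyze() RETURNS trigger AS $$
DECLARE
    n bigint;
BEGIN
    -- session-local counter kept in a custom setting
    n := coalesce(current_setting('myapp.rows_since_analyze', true), '0')::bigint + 1;
    PERFORM set_config('myapp.rows_since_analyze', n::text, false);
    IF n >= 100000 THEN
        PERFORM set_config('myapp.rows_since_analyze', '0', false);
        -- ANALYZE is allowed inside a transaction, so this works from
        -- plpgsql, though (as noted below) it will not see in-progress
        -- rows from other sessions.
        EXECUTE 'ANALYZE ' || quote_ident(TG_TABLE_NAME);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER count_inserts AFTER INSERT ON my_table
    FOR EACH ROW EXECUTE PROCEDURE maybe_analyze();
```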

7 трав. 2013 08:33, "Mark Kirkwood" <mark.kirkwood@catalyst.net.nz
[Translated from Ukrainian: "On 7 May 2013 08:33" ... "wrote:"]
напис.

On 07/05/13 18:10, Simon Riggs wrote:

On 7 May 2013 01:23, wrote:

I'm thinking that a variant of (2) might be simpler to implement:

(I think Matt C essentially beat me to this suggestion - he originally discovered this issue). It is probably good enough for only *new* plans to react to the increased/increasing number of in-progress rows. So this would require backends doing significant numbers of row changes to either directly update pg_statistic or report their in-progress numbers to the stats collector. The key change here is that the partial execution numbers would need to be sent. Clearly one would need to avoid doing this too often (!) - possibly only when the number of changed rows exceeds an autovacuum_analyze_scale_factor proportion of the relation concerned, or similar.
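The reporting condition described above would presumably mirror autovacuum's own analyze test (threshold plus a scale-factor proportion of the table's row count). A minimal sketch of that check, with illustrative default values:

```python
# Sketch of the proposed reporting condition, modelled on autovacuum's
# analyze test: report partial execution numbers to the stats machinery
# only once in-progress changes exceed a threshold plus a proportion of
# the relation's size. Parameter names/defaults are illustrative.
def should_report(changed_rows: int, reltuples: float,
                  scale_factor: float = 0.1, threshold: int = 50) -> bool:
    """True when changed_rows crosses the autovacuum-style limit."""
    return changed_rows > threshold + scale_factor * reltuples

# For a 1,000,000-row table the limit is 50 + 0.1 * 1,000,000 = 100,050
# in-progress rows before partial numbers would be reported.
```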


Are you loading using COPY? Why not break down the load into chunks?


INSERT - but we could maybe work around this by chunking the INSERT.
However that *really* breaks the idea that in SQL you just say what
you want, not how the database engine should do it! And more
practically, it means that the most obvious and clear way to add your
new data has nasty side effects, and you have to tiptoe around
muttering secret incantations to make things work well :-)
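For what it's worth, the chunking workaround can be done client-side independently of the driver. A minimal, driver-agnostic sketch (the batch size and the commented DB-API calls are illustrative, not prescribed by the thread):

```python
from itertools import islice
from typing import Iterable, Iterator, List

def chunks(rows: Iterable, size: int = 10000) -> Iterator[List]:
    """Yield successive batches of rows so an explicit ANALYZE can be
    issued between committed batches, letting new plans see the growing
    row count."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# With a hypothetical DB-API connection `conn` and cursor `cur`, usage
# would look roughly like:
#
#   for batch in chunks(rows):
#       cur.executemany("INSERT INTO t VALUES (%s, %s)", batch)
#       conn.commit()
#       cur.execute("ANALYZE t")
```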

I'm still thinking that making Postgres smarter about keeping stats
current, so it can find the actual optimal plan, is the best solution.
Unfortunately a trigger will not really do the job - ANALYZE ignores
in-progress rows (unless they were added by the current transaction),
and the changes made by ANALYZE are not seen by any other sessions. So
there are no changes to plans until the entire INSERT is complete and
the COMMIT happens (which could be a while - too long in our case).

Figuring out how to improve on this situation is tricky.


Cheers

Mark

Thread: pgsql-performance @ postgresql.org - 33 posts, 10 users; active Apr 26, 2013 to Jul 13, 2013.