I'm evaluating pg for use in my company, and have run into a bit of a snag.
One of the tests I've been running is a loop of 10,000 "select *
from foo" statements from a Perl program, where foo is:
 Attribute |  Type   | Modifier
-----------+---------+----------
 bar       | integer |
 zag       | text    |
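For reference, the loop looks roughly like this (a sketch, assuming DBI with DBD::Pg; the database name and the empty credentials are placeholders, not the actual values):

    #!/usr/bin/perl
    # Sketch of the benchmark: 10,000 selects over one connection.
    use strict;
    use DBI;

    # Placeholder DSN/credentials -- substitute your own.
    my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '', { RaiseError => 1 });
    my $sth = $dbh->prepare('select * from foo');
    for my $i (1 .. 10_000) {
        $sth->execute;
        while (my @row = $sth->fetchrow_array) { }   # fetch and discard rows
    }
    $dbh->disconnect;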
When I initially ran this test on my workstation (500 MHz PIII, 128 MB
RAM, Debian 2.2 with a 2.2.16 kernel) the whole process took around
10 seconds. After getting results from my select test, I did 10,000
updates (which took 37 seconds on average), and then deleted the rows I'd
updated (from psql).
Now, when I rerun the "select" test (against the same data that was
there before the updates), it takes far longer - results have
varied from roughly 300 seconds to over 700.
To make sure that the whole pg installation wasn't screwed up, I created
another similar table and ran my 10,000-select script against it - and
results are back down to 10 seconds. So it seems that somewhere in the
process of running a bunch of updates against "foo" (and then deleting
them) things have become screwed up.
What could be slowing selects against this table down, and how would
I proceed to investigate the matter further? I've been reading through
the pg docs, and haven't seen much on performance monitoring other than
"explain" (which reports exactly the same plan for both the fast and
slow tables). Is there a log somewhere, or a command that would further
show me what's going on?