If you use "pgbench -S -M prepared" at a scale where all data fits in
memory, most of what you are benchmarking is network/IPC chatter and
table locking, which is fine if that is what you want to do. This
patch adds a new transaction type, -P, which does the same thing as
-S but moves the main loop of selects, 10,000 at a time, into
pl/pgSQL. This does a good job of exercising the executor rather than
the network/IPC overhead. It can simulate workloads that have
primary-key lookups as the inner side of a large nested loop. It is
also useful for isolating and profiling parts of the backend code.
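To illustrate the idea, a minimal sketch of such a server-side select loop is below. This is my own sketch, not the patch itself: the function name, signature, and loop body in the actual patch may differ, and it assumes the standard pgbench_accounts table with 100,000 rows per scale unit.

```sql
-- Hypothetical sketch of the -P inner loop: run a batch of
-- primary-key selects server-side, in one call, instead of
-- one client/server round trip per select.
CREATE OR REPLACE FUNCTION pgbench_select_loop(scale integer,
                                               loops integer)
RETURNS integer AS $$
DECLARE
    i integer;
    v_aid integer;
    v_abalance integer;
BEGIN
    FOR i IN 1..loops LOOP
        -- pick a random account id within the scaled range
        -- (pgbench_accounts holds 100000 rows per scale unit)
        v_aid := 1 + floor(random() * scale * 100000)::integer;
        SELECT abalance INTO v_abalance
            FROM pgbench_accounts WHERE aid = v_aid;
    END LOOP;
    RETURN loops;
END;
$$ LANGUAGE plpgsql;

-- One "transaction" then executes 10,000 selects in a single call:
-- SELECT pgbench_select_loop(:scale, 10000);
```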
I did not implement this as a new query mode (-M plpgsql), because the
lack of transaction control in pl/pgSQL means it can only be used for
select-only transactions rather than as a general method. So I
thought a new transaction type made more sense.
I didn't implement it as a custom file using -f because:
1) It seems to be a natural extension of the existing built-ins. Also
-f is fiddly. Several times I've wanted to ask posters who are
discussing the other built-in transactions to run something like this
and report back, which is easier to do if it is also built in.
2) It uses initialization code, which -f does not support.
3) I don't see how I can make it automatically detect and respond to
:scale if it were run under -f.
Perhaps issues 2 and 3 would be best addressed by extending the
general -f facility, but that would be a lot more work, and I don't
know how well received it would be.
The reporting might be an issue. I don't want to call it TPS when it
is really not a transaction being reported, so for now I've just left
the TPS as true transactions, and added a separate reporting line
for selects per second.
I know I also need to add to the web-docs, but I'm hoping to wait on
that until I get some feedback on whether the whole approach is
considered to be viable or not.
Some numbers for single-client runs on 64-bit AMD Opteron Linux:
12,567 sps under -S
19,646 sps under -S -M prepared
58,165 sps under -P