In these (Postgres) mailing lists, top-posting is not preferred. See my replies inline below.
On Sun, Mar 27, 2011 at 1:01 PM, SUBHAM ROY wrote:
On Sun, Mar 27, 2011 at 10:20 PM, Gurjeet Singh wrote:
On Sun, Mar 27, 2011 at 9:22 AM, SUBHAM ROY wrote:
I am currently a student at IIT Bombay, doing a project on
"benchmark design". For that I need to measure the performance of various
queries in databases.
I want to know how we can measure the execution time of a query in
Postgres (EXPLAIN ANALYZE will not do). Also, are there any tools available
on Linux for measuring the performance of queries on databases such as
Oracle 11g, Postgres, etc.?
Any suggestions will be very helpful.
If EXPLAIN ANALYZE is not sufficient for your purpose, then you need
to elaborate on what exactly you are looking for.
Actually, I want to run a set of queries in Postgres on a HUGE data set,
and I have to compute the actual execution time for each of those queries.
So how can I do that in Postgres?
Suppose in Oracle following thing can be done :
set timing on;
select stuff from mytab;
Likewise, can I do this in Postgres? "set timing on" does not work in
Postgres. Also, are there any free tools available on Linux for doing so?
Postgres' command-line client is psql, and it has a meta-command,
\timing. (Note that \t is not a shortcut for it; \t toggles tuples-only output.)
This will show you the query execution time, including the network round
trip to send the query and receive the response.
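For example, a minimal psql session might look like this (the table name and the reported time are purely illustrative):

```sql
-- inside psql; \timing is a psql meta-command, not SQL
\timing
-- psql replies: Timing is on.
SELECT count(*) FROM mytab;  -- hypothetical table
-- after the result, psql prints a line like: Time: 12.345 ms
```

Because \timing is measured by the client, the reported time includes the network round trip, not just server-side execution.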
If you don't want to account for network round trips, then the server has a
GUC parameter called log_min_duration_statement. Either change its
value in postgresql.conf (so it affects all connections thereafter), or use
set log_min_duration_statement = 0;
as the first command in all your sessions to log all queries.
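A per-session sketch of that second approach (the query is a hypothetical example):

```sql
-- first command in the session: log the duration of every statement
-- (a value of 0 means "log all statements, however fast")
SET log_min_duration_statement = 0;

-- this query's text and duration will now appear in the server log
SELECT count(*) FROM mytab;  -- hypothetical table
```

Note that SET only affects the current session; editing postgresql.conf (and reloading) is what changes the default for all new connections.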
psql's \timing command results in times being shown right in the psql session,
but setting log_min_duration_statement results in the query and the time taken
being logged in the server logs (usually under $PGDATA/pg_log/ or in syslog).
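The resulting server log entries look roughly like this (the duration shown is illustrative):

```
LOG:  duration: 12.345 ms  statement: select count(*) from mytab;
```

Since this duration is measured inside the server, it excludes the client's network round trip, which makes it better suited to benchmarking query execution itself.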
The Enterprise PostgreSQL Company