One of my clients has been seeing pg_dump take a surprisingly large amount of time to dump a schema with a large number
of views. The client's hardware is quite spiffy, and yet pg_dump is
taking many minutes to dump a schema with some 35,000 views. Here's a
simple test case:
create schema views;
do 'begin for i in 1 .. 10000 loop execute $$create view views.v_$$
    || i ||$$ as select current_date as d, current_timestamp as ts,
    $_$a$_$::text || n as t, n from generate_series(1,5) as n$$;
    end loop; end;';
On my modest hardware, pg_dump took 4m18.864s to dump this database.
Should we be looking at replacing the retail operations which consume
most of this time (pg_dump issues a separate pg_get_viewdef() query for
each view, as in the log excerpt further down) with something set-based
that runs faster?
There is also this gem of behaviour, which is where I started: drop a
view in one session while pg_dump is running in another, e.g.

drop view foo;

and pg_dump falls over with this error:
2011-10-01 16:38:20 EDT  30063 ERROR: could not open
relation with OID 133640
2011-10-01 16:38:20 EDT  30064 STATEMENT: SELECT
pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef
Of course, this isn't caused by having a large catalog, but it's
terrible nevertheless. I'm not sure what to do about it.