While testing triggers, I came across the following memory leak.
Here's a simple test case:

CREATE TABLE foo(a int);

CREATE OR REPLACE FUNCTION trig_fn() RETURNS trigger AS
$$
BEGIN
RETURN NEW;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER ins_trig BEFORE INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE trig_fn();

INSERT INTO foo SELECT g
FROM generate_series(1, 5000000) AS g;

Memory usage goes up by around 100 bytes per row for the duration of the query.

The problem is that the trigger code assumes that anything it
allocates in the per-tuple memory context will be freed once per
tuple processed. That used to be the case because the loop in
ExecutePlan() calls ResetPerTupleExprContext() once each time round
the loop, which used to correspond to once per tuple.
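
For reference, the relevant part of the loop in ExecutePlan()
(simplified from execMain.c, with the details elided) looks roughly
like this:

for (;;)
{
    /* Reset the per-output-tuple exprcontext -- once per tuple here */
    ResetPerTupleExprContext(estate);

    /* Execute the plan and obtain a tuple */
    slot = ExecProcNode(planstate);

    if (TupIsNull(slot))
        break;

    /* ... send the tuple to the destination ... */
}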

However, with the refactoring of that code out to nodeModifyTable.c,
this is no longer the case: the ModifyTable node processes all the
tuples from the subquery before returning, so the per-tuple context is
not reset until the whole statement has run and the trigger's
allocations accumulate for every row. I think the loop in
ExecModifyTable() needs to call ResetPerTupleExprContext() each time
round as well.
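
Something like the following (an untested sketch only; the variable
names approximate what's in nodeModifyTable.c) is what I have in mind:

for (;;)
{
    /*
     * Reset the per-output-tuple exprcontext at the top of the loop,
     * so that whatever the triggers leave in it is freed once per row
     * instead of accumulating for the whole statement.
     */
    ResetPerTupleExprContext(estate);

    /* Fetch the next tuple from the subplan */
    planSlot = ExecProcNode(subplanstate);
    if (TupIsNull(planSlot))
        break;

    /* ... existing INSERT/UPDATE/DELETE handling ... */
}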

Regards,
Dean
