While testing triggers, I came across the following memory leak.
Here's a simple test case:
CREATE TABLE foo(a int);
CREATE OR REPLACE FUNCTION trig_fn() RETURNS trigger AS
$$BEGIN RETURN NEW; END$$ LANGUAGE plpgsql;
CREATE TRIGGER ins_trig BEFORE INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE trig_fn();
INSERT INTO foo SELECT g
FROM generate_series(1, 5000000) AS g;
Memory usage goes up by around 100 bytes per row for the duration of the query.
The problem is that the trigger code assumes that anything it
allocates in the per-tuple memory context will be freed once per tuple
processed. That used to be the case, because the loop in ExecutePlan()
calls ResetPerTupleExprContext() once each time round, which formerly
corresponded to once per tuple.
However, since that code was refactored out to nodeModifyTable.c, this
is no longer the case: the ModifyTable node processes all the tuples
from the subquery before returning, so I guess the loop in
ExecModifyTable() needs to call ResetPerTupleExprContext() each time
round as well.