Hi all....

I would like to start a dialogue and hear general feedback about the use
of constraint triggers in postgres (8.4.5).

Our overall issue is that ordinary (non-deferred) triggers are causing
slow inserts in our database, due to locking. Here are some details:

A little background (jboss/j2ee/hibernate/linux).....
We have 3 basic triggers on a particular database table: one for
inserts, one for updates, and another for deletes. They keep track of a
"granule count" that is used in reporting. This field (gracount) is
stored in another table called dataset. An example of the insert
trigger/function is as follows:

----------------------
-- The function must exist before the trigger that references it,
-- so it is created first.
CREATE OR REPLACE FUNCTION increment_dataset_granule_count()
RETURNS trigger AS
$BODY$
BEGIN
  -- rows with visible_id = 5 are excluded from the granule count
  IF NEW.visible_id != 5 THEN
    UPDATE dataset
    SET gracount = gracount + 1
    WHERE dataset.inv_id = NEW.inv_id;
  END IF;
  RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION increment_dataset_granule_count() OWNER TO jboss;

CREATE TRIGGER increment_dataset_granule_count
AFTER INSERT
ON inventory
FOR EACH ROW
EXECUTE PROCEDURE increment_dataset_granule_count();
-----------------------

What we found was that when these triggers fired, locking slowed
inserts into the inventory table (the table the triggers are on)
dramatically. You could see the inserts sit and wait by looking at the
pg_stat_activity view.
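On 8.4, a query like the following shows the blocked backends (these
are the 8.4 column names; procpid and current_query were renamed in
later releases):

```sql
-- Backends currently blocked waiting on a lock (PostgreSQL 8.4).
SELECT procpid, usename, waiting, current_query
FROM pg_stat_activity
WHERE waiting;
```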

Within our app, the trigger was invoked within the same hibernate
transaction that a stateless session bean was using to persist/merge the
granule (inventory). Subsequently, in the same transaction, an EJB MBean
was merging the dataset changes, creating a kind of "deadlock" on the
dataset table.
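A minimal two-session sketch of the contention (the inventory column
list is abbreviated and assumed): because the AFTER INSERT trigger
updates dataset, each insert holds a row lock on the matching dataset
row until commit, so concurrent inserts for the same dataset serialize:

```sql
-- Session A                        | -- Session B
BEGIN;                              |
INSERT INTO inventory               |
  (inv_id, visible_id)              |
  VALUES (42, 1);                   |
-- trigger UPDATEs dataset row 42,  |
-- taking a row lock on it          | BEGIN;
                                    | INSERT INTO inventory
                                    |   (inv_id, visible_id)
                                    |   VALUES (42, 1);
                                    | -- blocks: its trigger needs the
                                    | -- same dataset row lock
-- app merges dataset changes ...   |
COMMIT;                             | -- B's insert now completes
```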

Our first try at solving this problem has been to convert these triggers
into constraint triggers, which can be marked DEFERRABLE INITIALLY
DEFERRED. This defers the trigger function until the end of the
triggering transaction, at commit time. We believe this will fix our
locking problem and hopefully speed up our inserts again.
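For the insert trigger, the conversion is a change in the declaration
only (a sketch; the function body stays the same):

```sql
DROP TRIGGER increment_dataset_granule_count ON inventory;

CREATE CONSTRAINT TRIGGER increment_dataset_granule_count
AFTER INSERT
ON inventory
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW
EXECUTE PROCEDURE increment_dataset_granule_count();
```

With the trigger deferred, the UPDATE on dataset (and its row lock)
happens at commit, so the lock is held only briefly instead of for the
life of the transaction.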

Any comments or past experiences would certainly be helpful!

thanks,

Maria Wilson
NASA/Langley Research Center
Hampton, Virginia 23681

  • Ben Chobot at Sep 27, 2011 at 5:00 am

    On Sep 26, 2011, at 10:52 AM, Maria L. Wilson wrote:

    > Our first try to solve this problem has been to convert these triggers into a constraint trigger which allows for DEFERRABLE INITIALLY DEFERRED flags. This, we are finding, is forcing the trigger function to run after the triggering transaction is completed. We believe this will fix our locking problem and hopefully speed up our inserts again.
    >
    > Any comments or past experiences would certainly be helpful!

    My memory is fuzzy but as I recall, a possible downside to using deferred constraints was increased memory usage, though I cannot see how at the moment. Regardless, I think the upshot is that they aren't without their cost, but as long as you aren't doing massive transactions that cost is probably one you can afford to pay without much worry.
  • Craig Ringer at Sep 28, 2011 at 12:37 am

    On 09/27/2011 12:54 PM, Ben Chobot wrote:
    > On Sep 26, 2011, at 10:52 AM, Maria L. Wilson wrote:
    >
    >> Our first try to solve this problem has been to convert these triggers
    >> into a constraint trigger which allows for DEFERRABLE INITIALLY
    >> DEFERRED flags. This, we are finding, is forcing the trigger function
    >> to run after the triggering transaction is completed. We believe this
    >> will fix our locking problem and hopefully speed up our inserts again.
    >>
    >> Any comments or past experiences would certainly be helpful!
    >
    > My memory is fuzzy but as I recall, a possible downside to using
    > deferred constraints was increased memory usage

    That's right. PostgreSQL doesn't currently support spilling of pending
    constraint information to disk; it has to keep it in RAM, and with
    sufficiently huge deferred updates/inserts/deletes it's possible for the
    backend to run out of RAM to use.

    > though I cannot see how at the moment.

    A list of which triggers to run, and on which tuples, must be maintained
    until those triggers are fired. That list has to be kept somewhere.
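    One practical consequence: for a large bulk load you may want to
    switch the trigger back to immediate timing for that transaction so
    the pending-event list does not grow with every row (a sketch; SET
    CONSTRAINTS accepts a deferrable constraint trigger's name):

```sql
BEGIN;
-- Fire the deferrable trigger at end of each statement instead of
-- queueing an event per row until commit:
SET CONSTRAINTS increment_dataset_granule_count IMMEDIATE;
-- ... large COPY / INSERT into inventory ...
COMMIT;
```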

    --
    Craig Ringer
  • Ben Chobot at Sep 30, 2011 at 3:12 am

    On Sep 27, 2011, at 6:37 PM, Craig Ringer wrote:
    > On 09/27/2011 12:54 PM, Ben Chobot wrote:
    >
    >> My memory is fuzzy but as I recall, a possible downside to using
    >> deferred constraints was increased memory usage
    >
    > That's right. PostgreSQL doesn't currently support spilling of pending constraint information to disk; it has to keep it in RAM, and with sufficiently huge deferred updates/inserts/deletes it's possible for the backend to run out of RAM to use.
    >
    >> though I cannot see how at the moment.
    >
    > A list of which triggers to run, and on which tuples, must be maintained until those triggers are fired. That list has to be kept somewhere.

    Well when you put it like that, it's so obvious. :)

Discussion Overview
group: pgsql-performance @ postgresql.org
categories: postgresql
posted: Sep 26, '11 at 5:17p
active: Sep 30, '11 at 3:12a
posts: 4
users: 3
irc: #postgresql
