FAQ
I am wondering if I am missing something obvious. If not, I have a suggestion for plpgsql.

Stored procedures can accept rows.
Libpq can receive rows (PQResult).

Wouldn't it be a great interface if PQResult was "bi-directional"? Create a result set on the client then call the database with a command.

Perhaps...
PQinsert(PQResult,"schema.table"); //iterate thru rows inserting
PQupdate(PQResult,"schema.table"); //iterate thru rows updating

PQexec(connection,"schema.function",PQResult) //iterate thru rows passing each row as arg to stored procedure.
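(For context, the server half of this already works: every table implicitly defines a composite row type, and a function can accept a whole row as one argument. A minimal sketch, with hypothetical table/function names:)

```sql
-- Every table has a matching row (composite) type.
CREATE TABLE widget (id int, name text);

-- A function taking an entire widget row as a single argument.
CREATE FUNCTION describe(w widget) RETURNS text AS $$
    SELECT w.id || ': ' || w.name;
$$ LANGUAGE sql;

-- A row value built on the fly, cast to the table's row type:
SELECT describe(ROW(1, 'gear')::widget);
```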


  • Andrew Chernow at Jun 19, 2011 at 4:39 pm

    On 6/19/2011 11:04 AM, Jeff Shanab wrote:
    I am wondering if I am missing something obvious. If not, I have a suggestion
    for plpgsql.

    Stored procedures can accept rows.

    Libpq can receive rows (PQResult).

    Wouldn’t it be a great interface if PQResult was “bi-directional”? Create a
    result set on the client then call the database with a command.

    Perhaps…

    PQinsert(PQResult,"schema.table"); //iterate thru rows inserting

    PQupdate(PQResult,"schema.table"); //iterate thru rows updating

    PQexec(connection,"schema.function",PQResult) //iterate thru rows passing row as
    arg to stored procedure.
    Have you looked into libpqtypes? It allows you to pack nested structures/arrays
    and pass them as query/function parameters.

    http://pgfoundry.org/projects/libpqtypes/
    http://libpqtypes.esilo.com/ (docs)

    --
    Andrew Chernow
    eSilo, LLC
    every bit counts
    http://www.esilo.com/
  • Dmitriy Igrishin at Jun 19, 2011 at 7:37 pm
    Hey Jeff,

    2011/6/19 Jeff Shanab <jshanab@smartwire.com>
    I am wondering if I am missing something obvious. If not, I have a
    suggestion for plpgsql.

    Stored procedures can accept rows.

    Libpq can receive rows (PQResult).

    Wouldn't it be a great interface if PQResult was "bi-directional"? Create a
    result set on the client then call the database with a command.

    Perhaps...

    PQinsert(PQResult,"schema.table"); //iterate thru rows inserting

    PQupdate(PQResult,"schema.table"); //iterate thru rows updating
    IMO, mapping C functions to SQL operators is a bad idea.
    If I understood you correctly, you want to turn libpq into an ORM. But
    without implementing functionality like C++ virtual functions on
    the _backend_ side, it is either impossible or ugly.

    --
    // Dmitriy.
  • Robert Haas at Jun 20, 2011 at 1:08 am

    On Sun, Jun 19, 2011 at 11:04 AM, Jeff Shanab wrote:
    I am wondering if I am missing something obvious. If not, I have a
    suggestion for plpgsql.

    Stored procedures can accept rows.

    Libpq can receive rows (PQResult).

    Wouldn’t it be a great interface if PQResult was “bi-directional”? Create a
    result set on the client then call the database with a command.
    For insert, we have something like this already - this is what "copy" is for.

    For update, it's a bit more complex - we don't have a "replace into" operator...
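    A minimal sketch of that existing insert path (table and column names hypothetical); from libpq the same stream is driven with PQputCopyData/PQputCopyEnd after a PQexec of the COPY command puts the connection into COPY_IN mode:

```sql
-- Bulk insert: each line is one row, tab-separated, terminated by \.
COPY widget (id, name) FROM STDIN;
1	gear
2	sprocket
\.
```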

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Merlin Moncure at Jun 21, 2011 at 7:55 pm

    On Sun, Jun 19, 2011 at 8:08 PM, Robert Haas wrote:
    On Sun, Jun 19, 2011 at 11:04 AM, Jeff Shanab wrote:
    I am wondering if I am missing something obvious. If not, I have a
    suggestion for plpgsql.

    Stored procedures can accept rows.

    Libpq can receive rows (PQResult).

    Wouldn’t it be a great interface if PQResult was “bi-directional”? Create a
    result set on the client then call the database with a command.
    For insert, we have something like this already - this is what "copy" is for.
    'copy' is a *bulk* insert statement -- it's great for the very
    specific case when you are dumbly stuffing data into the database,
    especially if performance is critical and sane error handling is not.
    It is not suitable for anything else: feeding data into functions,
    update/upsert/delete, insert with join, pre-post process, etc. Also
    copy runs through libpq textually at the line level, not at the field
    level like the rest of libpq.
    For update, it's a bit more complex - we don't have a "replace into" operator...
    Actually, we do. 9.1 supports data-modifying CTEs, around which it's
    possible to rig a perfectly reasonable upsert... barring that, you
    could trivially do something similar in a hand-rolled backend upsert
    function that takes a row or a set of rows (fed in as a composite
    array). Point being, the server has the necessary features -- it's
    the client that's the (solved) problem. At the risk of sounding
    'broken record repetitive', let me echo Andrew's comment upthread that
    libpqtypes solves the OP's problem completely, in a very elegant way.
    The basic M.O. is to:

    1. register the type you are using for transport (can either be the
    table or a composite type)
    2. for each record you want to send, PQputf that record, and if you
    are sending more than one, PQputf the record into its array
    3. PQparamExec() a query that might look like one of:

    /* straight up insert */
    PQparamExec(conn, param, "INSERT INTO foo SELECT
    (unnest(%foo[])).*", resfmt);

    /* send to function */
    PQparamExec(conn, param, "SELECT do_stuff(%foo[]) ", resfmt);

    /* upsert -- pre 9.1 this could be done in a plpgsql loop, etc */
    WITH foos AS (SELECT (unnest(%foo[])).*),
    updated AS (UPDATE foo SET a = foos.a ... RETURNING foo.id)
    INSERT INTO foo SELECT foos.* FROM foos LEFT JOIN updated USING (id)
    WHERE updated.id IS NULL;

    Basically, the trick is to exploit the server's composite array type
    features on the client side to do exactly what the OP is gunning for.
    You can send anything from simple arrays to entire complex nested
    structures that way -- although the complex stuff would typically go
    to a function. Performance-wise, it's faster than traditional query
    methods (everything is sent in binary) but slower than 'copy'.
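    The server-side mechanics here can be seen with plain SQL literals standing in for the client-built parameter (hypothetical table, with the row literals written as text and cast to the table's array type):

```sql
CREATE TABLE foo (id int, a text);

-- An array of foo rows, unnested back into a row set --
-- this is what the client-built composite-array parameter
-- becomes once it reaches the server:
INSERT INTO foo
SELECT (unnest(ARRAY['(1,x)','(2,y)']::foo[])).*;
```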

    merlin
  • Robert Haas at Jul 5, 2011 at 2:09 pm

    On Tue, Jun 21, 2011 at 3:55 PM, Merlin Moncure wrote:
    For update, it's a bit more complex - we don't have a "replace into" operator...
    Actually, we do. 9.1 supports data modifying CTE around which it's
    possible to rig a perfectly reasonable upsert...barring that, you
    could trivially do something similar in a hand rolled backend upsert
    function that takes a row or a set of rows (fed in as a composite
    array).
    I don't believe that any of the solutions we have today are guaranteed
    to behave correctly in the face of concurrent activity. Because of
    the way snapshot isolation works, you can try to update an existing
    record, find that there isn't one, and then fail when you go to insert
    because some other backend has meanwhile inserted one that isn't
    visible to your snapshot. Doing the operations in the other order is
    no better.

    I'm not saying this is the biggest problem in the entire world, but I
    do think it's a non-imaginary problem.
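    For contrast, the retry-loop upsert from the PostgreSQL documentation sidesteps that race at READ COMMITTED by catching unique_violation and retrying the UPDATE until one of the two operations succeeds (table and function names follow the docs' example):

```sql
CREATE TABLE db (a integer PRIMARY KEY, b text);

CREATE FUNCTION merge_db(key integer, data text) RETURNS void AS $$
BEGIN
    LOOP
        -- first try to update an existing row
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there: try to insert; if another backend inserted
        -- the same key concurrently, loop and try the UPDATE again
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing; retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```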

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company

Discussion Overview
group: pgsql-hackers @ postgresql.org
posted: Jun 19, 2011, 3:11 p.m.
active: Jul 5, 2011, 2:09 p.m.
posts: 6
users: 5