Folks,

I'm trying to optimize the following query, which computes the Kullback–Leibler divergence [1]. As you can see, the distance function operates on vectors of 150 floats.

The query takes 12 minutes to run on an EC2 m1.large instance that is idle apart from PostgreSQL, with 2 million documents in the docs table. The CPU is pegged at 100% for the duration. I need to be able to process concurrent distance queries and still use the database for other work.

I have the option of moving this distance calculation out of PostgreSQL, but are there other options?

Is there anything clearly wrong that I'm doing here?

Would it speed things up to make the float array a custom data type backed by C code?

Thanks in advance, Joel

[1] http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence

---

CREATE DOMAIN topics AS float[150];
CREATE DOMAIN doc_id AS varchar(64);

CREATE TABLE docs
(
id serial,
doc_id doc_id NOT NULL PRIMARY KEY,
topics topics NOT NULL
);

CREATE OR REPLACE FUNCTION docs_within_distance(vec topics, threshold float)
RETURNS TABLE(id doc_id, distance float) AS $$
BEGIN
RETURN QUERY
SELECT *
FROM (SELECT doc_id, (SELECT sum(vec[i] * ln(vec[i] / topics[i]))
FROM generate_subscripts(topics, 1) AS i
WHERE topics[i] > 0) AS distance
FROM docs) AS tab
WHERE tab.distance <= threshold;
END;
$$ LANGUAGE plpgsql;
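For context, an invocation might look like the following (hypothetical usage, not part of the original post; the vector literal is abbreviated). Each call scans all 2 million rows and runs the 150-element sub-select once per row, which is where the 12 minutes go:

```sql
-- Hypothetical call; a real query vector would carry all 150 values.
SELECT * FROM docs_within_distance(
    ARRAY[0.012, 0.003 /* ... 150 probabilities ... */]::topics,
    0.5);
```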


--------------------------------------------------------------------------
- for hire: mac osx device driver ninja, kernel extensions and usb drivers
---------------------+------------+---------------------------------------
http://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont
---------------------+------------+---------------------------------------


  • Tom Lane at Apr 26, 2011 at 4:00 pm

    Joel Reymont writes:
    I'm trying to optimize the following query that performs KL Divergence [1]. As you can see the distance function operates on vectors of 150 floats.
    CREATE OR REPLACE FUNCTION docs_within_distance(vec topics, threshold float)
    RETURNS TABLE(id doc_id, distance float) AS $$
    BEGIN
    RETURN QUERY
    SELECT *
    FROM (SELECT doc_id, (SELECT sum(vec[i] * ln(vec[i] / topics[i]))
    FROM generate_subscripts(topics, 1) AS i
    WHERE topics[i] > 0) AS distance
    FROM docs) AS tab
    WHERE tab.distance <= threshold;
    END;
    $$ LANGUAGE plpgsql;
    Yikes. That sub-select is a mighty expensive way to compute the scalar
    product. Push it into a sub-function that takes the two arrays and
    iterates over them with a for-loop. For another couple orders of
    magnitude, convert the sub-function to C code. (I don't think you need
    a whole data type, just a function that does the scalar product.)

    regards, tom lane
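A loop-based sub-function along the lines Tom suggests might look like this (a hypothetical sketch, not posted in the thread; the function name and IMMUTABLE marking are assumptions):

```sql
-- Sketch of Tom's suggestion: replace the per-row sub-select over
-- generate_subscripts() with a plain FOR loop over both arrays.
CREATE OR REPLACE FUNCTION kl_distance(vec topics, t topics)
RETURNS float AS $$
DECLARE
    s float := 0;
BEGIN
    FOR i IN array_lower(t, 1) .. array_upper(t, 1) LOOP
        IF t[i] > 0 THEN
            s := s + vec[i] * ln(vec[i] / t[i]);
        END IF;
    END LOOP;
    RETURN s;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
```

The main query then reduces to a single scan, e.g. `SELECT doc_id, kl_distance(vec, topics) AS distance FROM docs`, filtered on the threshold.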
  • Joel Reymont at Apr 27, 2011 at 7:00 am
    Tom,
    On Apr 26, 2011, at 5:00 PM, Tom Lane wrote:

    For another couple orders of magnitude, convert the sub-function to C code. (I don't think you need
    a whole data type, just a function that does the scalar product.)
    That's a 30x speedup, from 12 minutes down to 38s. Thanks Tom!

  • Tom Lane at Apr 27, 2011 at 2:06 pm

    Joel Reymont writes:
    On Apr 26, 2011, at 5:00 PM, Tom Lane wrote:
    For another couple orders of magnitude, convert the sub-function to C code. (I don't think you need
    a whole data type, just a function that does the scalar product.)
    That's a 30x speedup, from 12 minutes down to 38s. Thanks Tom!
    Huh, I would've bet on a lot more actually. The nodeFunctionscan and
    nodeAgg code must not be as inefficient as it looks on the surface ...

    regards, tom lane
  • Hitoshi Harada at May 19, 2011 at 8:28 am

    2011/4/27 Tom Lane <tgl@sss.pgh.pa.us>:
    Joel Reymont <joelr1@gmail.com> writes:
    On Apr 26, 2011, at 5:00 PM, Tom Lane wrote:
    For another couple orders of magnitude, convert the sub-function to C code.  (I don't think you need
    a whole data type, just a function that does the scalar product.)
    That's a 30x speedup, from 12 minutes down to 38s. Thanks Tom!
    Huh, I would've bet on a lot more actually.  The nodeFunctionscan and
    nodeAgg code must not be as inefficient as it looks on the surface ...
    Did you mean in that case you can optimize it by collapsing those
    nodes into one?

    Regards,

    --
    Hitoshi Harada
  • Tom Lane at May 19, 2011 at 1:31 pm

    Hitoshi Harada writes:
    2011/4/27 Tom Lane <tgl@sss.pgh.pa.us>:
    Joel Reymont <joelr1@gmail.com> writes:
    That's a 30x speedup, from 12 minutes down to 38s. Thanks Tom!
    Huh, I would've bet on a lot more actually.  The nodeFunctionscan and
    nodeAgg code must not be as inefficient as it looks on the surface ...
    Did you mean in that case you can optimize it by collapsing those
    nodes into one?
    No, just that the overhead of that code looks like it ought to be much
    more than the cost of a few multiplications ...

    regards, tom lane

Discussion Overview
group: pgsql-general @ postgresql.org
category: postgresql
posted: Apr 26, '11 at 2:58p
active: May 19, '11 at 1:31p
posts: 6
users: 3
irc: #postgresql
