Grokbase Groups Cayenne dev May 2009
Now that I know of this JDBC feature, I'd prefer to have it in Cayenne sooner
rather than later. We would also want to have it for other queries, not just
SelectQuery. Could someone help me test it against different types of
databases if I commit?

2009/5/20 stefcl <stefatwork@gmail.com>
I have posted a small fix in the jira.


I added the following in SelectQuery.java:
***************************************
// quick fix: property for setting the JDBC fetch size
protected Integer cursorFetchSize;

/**
 * Gets the JDBC fetch size for this query.
 * @return the fetch size defined for this query, null if not set.
 */
public Integer getCursorFetchSize() {
    return cursorFetchSize;
}

/**
 * Sets the JDBC fetch size for this query.
 * @param cursorFetchSize the desired fetch size, or null to reset it.
 */
public void setCursorFetchSize(Integer cursorFetchSize) {
    this.cursorFetchSize = cursorFetchSize;
}
// end quick fix
***************************************

And modified SelectAction.java to add the check in performAction:

public void performAction(Connection connection, OperationObserver observer)
        throws SQLException, Exception {

    long t1 = System.currentTimeMillis();

    SelectTranslator translator = createTranslator(connection);
    PreparedStatement prepStmt = translator.createStatement();

    // quick fix: set the fetch size
    if (query.getCursorFetchSize() != null) {
        prepStmt.setFetchSize(query.getCursorFetchSize());
    }
    // quick fix end

    ResultSet rs = prepStmt.executeQuery();
    QueryMetadata md = query.getMetaData(getEntityResolver());

    (...)

********************************************************






  • Andrus Adamchik at May 21, 2009 at 1:11 pm
    Cool :-)
    On May 21, 2009, at 3:59 PM, Andrey Razumovsky wrote:

    Now that I know of this JDBC feature, I'd prefer to have it in Cayenne
    sooner rather than later. We would also want to have it for other queries,
    not just SelectQuery.
    Absolutely. This has to be a part of the QueryMetadata on the backend.
    On the frontend any query that can potentially select data should have
    a corresponding setter.
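
    A rough sketch of the shape such an API could take (the type and method
    names below are hypothetical, not existing Cayenne classes): the metadata
    object carries the fetch size, and every query type that can select data
    exposes a setter that feeds it.

    // Hypothetical sketch of the backend/frontend split described above.
    interface FetchSizeMetadata {
        /** Returns the JDBC fetch size to set on the statement, 0 meaning "driver default". */
        int getStatementFetchSize();
    }

    class FetchSizeAwareQuery implements FetchSizeMetadata {
        private int statementFetchSize;

        public void setStatementFetchSize(int size) {
            this.statementFetchSize = size;
        }

        public int getStatementFetchSize() {
            return statementFetchSize;
        }
    }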
    Could someone help me test it against different types of databases if I
    commit?
    I can test it on almost all DBs that we support. Of course we should have
    Cayenne unit tests that provide regression coverage (i.e. driver XYZ
    doesn't throw UnsupportedOperationException when we call the corresponding
    JDBC method).

    In addition to that I'd like to see if there are really memory/speed
    savings when using it (i.e. is it really worth it). For that I suggest
    writing a JDBC test outside of Cayenne that can be run in a profiler
    against different DBs.

    Andrus
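
    A minimal standalone JDBC probe along the lines Andrus suggests might look
    like the sketch below. The JDBC URL, credentials and table name are
    placeholders, not taken from the thread; run it with different fetch sizes
    and compare heap usage and the time to the first row.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FetchSizeProbe {

        public static void main(String[] args) throws Exception {
            int fetchSize = args.length > 0 ? Integer.parseInt(args[0]) : 0;

            // Placeholder connection settings - point at the DB under test.
            Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/testdb", "user", "password");

            // Postgres only streams results inside a transaction (autocommit off).
            con.setAutoCommit(false);

            PreparedStatement st = con.prepareStatement("SELECT * FROM big_table");
            if (fetchSize > 0) {
                st.setFetchSize(fetchSize);
            }

            long start = System.currentTimeMillis();
            ResultSet rs = st.executeQuery();
            System.out.println("executeQuery() took "
                    + (System.currentTimeMillis() - start) + " ms");

            int rows = 0;
            while (rs.next()) {
                rows++;
            }

            Runtime rt = Runtime.getRuntime();
            System.out.println(rows + " rows, total "
                    + (System.currentTimeMillis() - start) + " ms, used heap "
                    + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024) + " MB");

            rs.close();
            st.close();
            con.rollback();
            con.close();
        }
    }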
  • Andrey Razumovsky at May 24, 2009 at 9:52 am
    I should say my tests on Postgres and MySQL didn't show any results. If I
    use a small heap size, I get OutOfMemory no matter which fetch size was
    set. Fetch speed and memory usage are the same. It seems the drivers just
    ignore this parameter. Stephane, did your workaround help you?

    The HSQLDB, Postgres and MySQL drivers didn't throw any exceptions. Maybe
    other DBMS drivers handle this parameter well. I'm going to commit today
    or tomorrow. Nothing will break if we add this ability.

    Andrey

  • Andrus Adamchik at May 25, 2009 at 6:38 am
    Somehow I am not surprised. I may also try that on Oracle when the
    code becomes available on trunk.

    (BTW, not sure that Stephane is subscribed to the dev list, so cc'ing
    this message.)

    Andrus
  • Stefcl at May 25, 2009 at 9:27 am
    Hello,

    Setting fetchSize with Postgres does make a difference, even in a simple
    JDBC select. Consider the following code, copy-pasted from my test case:

    PreparedStatement stmt = sqlMapper.getCurrentConnection().prepareStatement(
            "SELECT itemID, sellingprice, margin, popularityrating, name, att1, att2, "
            + "att3, att4, att5, longtailpos, timest "
            + "FROM itemdetails WHERE profileID = 10 ORDER BY longtailpos");

    stmt.setFetchSize(1000);
    stmt.execute();
    ResultSet set = stmt.executeQuery();

    int i = 0;
    while (set.next()) {
        i++;
        System.out.println(set.getString(1));
    }


    It can iterate through approx. 1'500'000 rows without any memory issues
    (the Java process stays at approx. 20 MB while the loop executes). I can
    also see that it takes no more than a few seconds before the first row is
    printed to the output console, meaning that results are still being
    fetched from the DB while the result set is being read.

    Now if I comment out the line "stmt.setFetchSize(1000);" in the above
    code, execution stalls at "ResultSet set = stmt.executeQuery();" for
    approx. 15 seconds and then an OutOfMemoryError is thrown from JDBC code.
    In Cayenne code, that would happen before the result set is even accessed
    (making ResultIterator useless).

    A similar problem is described here, with a sample stack trace:
    http://www.nabble.com/java.sql.Statement-generates-java.lang.OutOfMemoryError-in-big-tabe-td21568763.html

    And the reason is detailed here:
    http://jdbc.postgresql.org/documentation/83/query.html#query-with-cursor

    Perhaps other drivers that automatically use server-side cursors aren't
    affected, or they just ignore this setting, but I can say for sure that
    this setting is necessary with Postgres.
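
    For reference, the conditions the PostgreSQL documentation linked above
    sets for cursor-based fetching can be summarized roughly as in the sketch
    below (this is illustrative code, not code from the thread):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class PgCursorFetch {

        // Per the linked docs, the Postgres driver only fetches rows in
        // batches (via a server-side cursor) when:
        //  - autocommit is off, i.e. the query runs inside a transaction,
        //  - the ResultSet type is TYPE_FORWARD_ONLY (the default),
        //  - a non-zero fetch size is set,
        //  - the SQL is a single statement.
        static PreparedStatement cursorStatement(Connection con, String sql, int fetchSize)
                throws SQLException {
            con.setAutoCommit(false);
            PreparedStatement st = con.prepareStatement(
                    sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            st.setFetchSize(fetchSize);
            return st;
        }
    }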

    Andrey, you say you keep getting OutOfMemory errors no matter which value
    you set. Do you get them while iterating through the results, or during
    the call to executeQuery?
    If you get them while looping through the result set, it could mean that
    somehow the row objects aren't garbage collected. That can happen with
    Cayenne 2.0, because the DataContext holds strong references to the
    DataObjects. I didn't want to write a fix for v3 at first, but I realized
    that in v2 I had to manually unregister the DataObjects from the context
    in order to keep memory usage in a safe range.

    If I can be of any help, please ask.
    Best regards...

    Stéphane
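
    For illustration, a minimal sketch of the Cayenne 2.0-era workaround
    Stéphane describes: periodically unregister the DataObjects that have
    already been processed so the DataContext does not pin every row in
    memory. This is an assumption-laden sketch (the batch size of 1000 is
    arbitrary, and the entity class is passed in as a parameter), not code
    from the thread.

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.cayenne.DataRow;
    import org.apache.cayenne.access.DataContext;
    import org.apache.cayenne.access.ResultIterator;
    import org.apache.cayenne.query.SelectQuery;

    class LargeResultWalker {

        static void walk(DataContext context, SelectQuery query, Class entityClass)
                throws Exception {
            ResultIterator it = context.performIteratedQuery(query);
            List batch = new ArrayList();
            try {
                while (it.hasNextRow()) {
                    Object row = context.objectFromDataRow(
                            entityClass, (DataRow) it.nextRow(), false);
                    // ... process the object here ...
                    batch.add(row);

                    if (batch.size() >= 1000) {
                        // Drop the strong references held by the DataContext.
                        context.unregisterObjects(batch);
                        batch.clear();
                    }
                }
                context.unregisterObjects(batch);
            } finally {
                it.close();
            }
        }
    }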
  • Andrus Adamchik at May 25, 2009 at 9:22 am
    [Forwarding to the dev list. Stéphane, please forward your replies
    here as well or subscribe to dev if you feel like it]

    Stéphane, thanks for providing this information. It is encouraging.

    The mismatch in the results between you and Andrey is exactly why I
    wanted to do real profiling, as we need to see how the memory
    allocation changes in the process. OutOfMemory may or may not happen
    depending on many factors. What we care about here is the dynamics of
    ResultSet reading.

    I see that Andrey has already committed this code to trunk, so now we
    have something to play with. If you don't mind using a trunk build (doing
    it yourself or grabbing one from Hudson), you can give us feedback on the
    implementation.

    Andrus



  • Andrey Razumovsky at May 25, 2009 at 9:39 am
    I'm using the latest Postgres (or MySQL) driver. To make a test, I put 1M
    records in a DB table, then do "select * from that_table". No matter which
    fetch size I tried to set, at the moment of PreparedStatement.execute()
    something like 70-100 MB is allocated (I don't remember the exact amount
    right now). So, if the heap size is small, I get OutOfMemory. Otherwise,
    it's all OK and the rows can be quickly iterated. Looking at the code on
    the Postgres site, the only difference is that I didn't bother with
    autocommit mode. I'm not sure whether Cayenne does either... Maybe this is
    the reason why I didn't get positive results.

    Andrey
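
    If autocommit is indeed the difference, the standalone test only needs it
    switched off before executing. A sketch of the relevant lines (a helper
    method assuming java.sql imports and the same "that_table" table as
    above), not code from the thread:

    // With autocommit on, the Postgres driver materializes the whole ResultSet
    // inside executeQuery(); with autocommit off and a non-zero fetch size it
    // fetches rows in batches of fetchSize instead.
    static int countRows(Connection con) throws SQLException {
        con.setAutoCommit(false);
        PreparedStatement st = con.prepareStatement("SELECT * FROM that_table");
        st.setFetchSize(1000);
        ResultSet rs = st.executeQuery(); // should now return quickly and use little heap
        int rows = 0;
        while (rs.next()) {
            rows++;
        }
        rs.close();
        st.close();
        return rows;
    }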



  • Stefcl at May 25, 2009 at 10:13 am
    Here's the code I'm currently using; it's based on the v3-with-generics
    version I got from the repo a few days earlier:

    DataContext context = DataContext.createDataContext();

    SelectQuery query = new SelectQuery(ItemDetail.class);

    // query.andQualifier(ExpressionFactory.matchDbExp(ItemDetail.PROFILEID_PK_COLUMN, 10));
    // query.andQualifier(ExpressionFactory.likeExp(ItemDetail.NAME_PROPERTY, "G%"));
    query.addOrdering(ItemDetail.NAME_PROPERTY, Ordering.ASC);

    query.setCursorFetchSize(1000);

    ResultIterator iterator = context.performIteratedQuery(query);

    while (iterator.hasNextRow()) {
        ItemDetail detail = (ItemDetail) context.objectFromDataRow(
                ItemDetail.class, (DataRow) iterator.nextRow(), false);
        System.out.println(detail.getName() + "\t\t\t\t" + detail.getKeywords());
    }

    It works...
    If I remove the fetchSize parameter, it fails with the following stack trace:

    Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at java.lang.Class.getDeclaredFields0(Native Method)
        at java.lang.Class.privateGetDeclaredFields(Class.java:2291)
        at java.lang.Class.getDeclaredField(Class.java:1880)
        at java.util.concurrent.atomic.AtomicReferenceFieldUpdater$AtomicReferenceFieldUpdaterImpl.<init>(AtomicReferenceFieldUpdater.java:181)
        at java.util.concurrent.atomic.AtomicReferenceFieldUpdater.newUpdater(AtomicReferenceFieldUpdater.java:65)
        at java.sql.SQLException.<clinit>(SQLException.java:353)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1325)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:194)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:254)
        at org.apache.cayenne.access.jdbc.SelectAction.performAction(SelectAction.java:78)
        at org.apache.cayenne.access.DataNodeQueryAction.runQuery(DataNodeQueryAction.java:87)
        at org.apache.cayenne.access.DataNode.performQueries(DataNode.java:274)
        at org.apache.cayenne.access.DataDomainLegacyQueryAction.execute(DataDomainLegacyQueryAction.java:79)
        at org.apache.cayenne.access.DataDomain$1.transform(DataDomain.java:713)
        at org.apache.cayenne.access.DataDomain.runInTransaction(DataDomain.java:833)
        at org.apache.cayenne.access.DataDomain.performQueries(DataDomain.java:710)
        at org.apache.cayenne.access.DataContext.internalPerformIteratedQuery(DataContext.java:1213)
        at org.apache.cayenne.access.DataContext.performIteratedQuery(DataContext.java:1179)
        at cayenne.CayenneMain.main(CayenneMain.java:43)


    I'm surprised that you get different results... I use the latest 8.3.604
    JDBC driver.
    Otherwise, looking at the Cayenne logs it seems that a transaction is
    being started:

    INFO: --- will run 1 query.
    25 mai 2009 11:58:35 org.apache.cayenne.access.QueryLogger logConnect
    INFO: Opening connection: jdbc:postgresql://localhost/prediggo_db
    Login: postgres
    Password: *******
    25 mai 2009 11:58:35 org.apache.cayenne.access.QueryLogger logConnectSuccess
    INFO: +++ Connecting: SUCCESS.
    25 mai 2009 11:58:35 org.apache.cayenne.access.QueryLogger logBeginTransaction
    INFO: --- transaction started.
    25 mai 2009 11:58:36 org.apache.cayenne.access.QueryLogger log
    INFO: Detected and installed adapter: org.apache.cayenne.dba.postgres.PostgresAdapter
    25 mai 2009 11:58:36 org.apache.cayenne.access.QueryLogger logQuery
    INFO: SELECT t0.timest, t0.keywords,

    If you want, I can test your implementation with my test case, but I can't
    afford to spend much time setting up the build environment (JDK 1.5 +
    Maven stuff)... If I could get compiled binaries somewhere...



  • Andrey Razumovsky at May 25, 2009 at 10:18 am
    Well, I tested only with pure JDBC; maybe Cayenne sets up the environment
    correctly. It'd be great if you tested the committed code. You can get
    compiled binaries from Hudson:
    http://hudson.zones.apache.org/hudson/job/Cayenne-trunk/403/

  • Andrus Adamchik at May 25, 2009 at 10:28 am
    I should mention that Cayenne always sets autocommit to false. Not
    sure if that matters.

    Andrus

  • Stefcl at May 25, 2009 at 10:40 am
    According to the docs, and the problem Andrey encountered in his JDBC
    code, it probably does...

    I have tested the 1.5 build from Hudson, and I get exactly the same
    results as with my fix, which is positive. I think I'm going to stop using
    my own build.


    Andrus Adamchik wrote:
    I should mention that Cayenne always sets autocommit to false. Not
    sure if that matters.

    Andrus

Discussion Overview
group: dev @ cayenne
posted: May 21, '09 at 12:59p
active: May 25, '09 at 10:40a
posts: 11
users: 3
website: cayenne.apache.org
