Hi everyone,

I’ve been considering removing support for the “Use Shared Cache” checkbox in the Modeler and for the corresponding code in the framework. This is about a strategy for a *snapshot* cache that is used to save a DB trip when resolving to-one relationships or checking the previously committed state of a modified object. The alternative (i.e., when it is unchecked and a per-context cache is used) is not very useful IMO, and the need to support both strategies results in lots of dirty code.

Now I am wondering: has anyone ever unchecked that checkbox, and if so, what was the reason?

Also, if you have no idea what I am talking about, that also answers my question :)

Thanks,
Andrus
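
A minimal sketch of where the setting lives at runtime, assuming Cayenne 3.1's ServerRuntime and DataDomain API; the project file name is illustrative, and the Modeler checkbox is assumed to correspond to the DataDomain-level "shared cache" flag read below.

import org.apache.cayenne.ObjectContext;
import org.apache.cayenne.access.DataDomain;
import org.apache.cayenne.configuration.server.ServerRuntime;

public class SharedCacheSketch {

    public static void main(String[] args) {
        // Illustrative config location; a Modeler-generated project is assumed.
        ServerRuntime runtime = new ServerRuntime("cayenne-project.xml");

        // With the shared cache enabled (the default), all ObjectContexts share
        // one snapshot (DataRow) cache, so a row already fetched in one context
        // can resolve a to-one relationship in another without a DB trip. With
        // it disabled, each context keeps its own snapshot cache.
        DataDomain domain = runtime.getDataDomain();
        System.out.println("shared snapshot cache: " + domain.isSharedCacheEnabled());

        // The per-context alternative discussed in this thread (assumed setter):
        // domain.setSharedCacheEnabled(false);

        ObjectContext context = runtime.getContext();
        // ... run queries and commits against 'context' as usual ...

        runtime.shutdown();
    }
}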


  • Aristedes Maniatis at Nov 4, 2013 at 7:16 am
    We experimented with that option and unticking it caused unexpected behaviour that was quite hard to understand, particularly in how it interrelated with the object cache. Dima is the person who knows all the details of that experiment.

    I certainly remember thinking to myself "never uncheck that option".

    Ari


    --
    -------------------------->
    Aristedes Maniatis
    GPG fingerprint CBFB 84B4 738D 4E87 5E5C 5EFA EF6A 7D2E 3E49 102A
  • Mike Kienenberger at Nov 4, 2013 at 1:25 pm
    I was considering unchecking it to force every session of my web
    application to work with its own set of records from the database.
  • Andrus Adamchik at Nov 4, 2013 at 1:30 pm
    Wonder if there’s any benefit here vs. just starting separate ServerRuntimes per session?
    On Nov 4, 2013, at 4:24 PM, Mike Kienenberger wrote:

    I was considering unchecking it to force every session of my web
    application to work with its own set of records from the database.
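
    A sketch of the per-session-runtime idea, assuming a Servlet 3.0 container and Cayenne 3.1; the listener class, session attribute key and project file name are illustrative only.

    import javax.servlet.annotation.WebListener;
    import javax.servlet.http.HttpSessionEvent;
    import javax.servlet.http.HttpSessionListener;

    import org.apache.cayenne.configuration.server.ServerRuntime;

    // Gives every HTTP session its own Cayenne stack, so sessions never share a
    // snapshot cache at all - an alternative to unchecking "Use Shared Cache".
    @WebListener
    public class PerSessionCayenneListener implements HttpSessionListener {

        static final String RUNTIME_KEY = "cayenne.runtime"; // illustrative key

        @Override
        public void sessionCreated(HttpSessionEvent se) {
            // Each runtime re-reads the same project file; see later in the
            // thread for how the underlying DataSource can still be shared.
            ServerRuntime runtime = new ServerRuntime("cayenne-project.xml");
            se.getSession().setAttribute(RUNTIME_KEY, runtime);
        }

        @Override
        public void sessionDestroyed(HttpSessionEvent se) {
            ServerRuntime runtime =
                    (ServerRuntime) se.getSession().getAttribute(RUNTIME_KEY);
            if (runtime != null) {
                runtime.shutdown();
            }
        }
    }

    Request-handling code would then fetch the runtime from the session and call getContext() on it; the connection and memory trade-offs are exactly what the rest of the thread discusses.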
  • Mike Kienenberger at Nov 4, 2013 at 1:41 pm
    Having separate ServerRuntimes would require separate connections to
    the database, correct? If so, that would not scale well.

    I'm guessing it would also use quite a bit more memory if each session
    had its own ServerRuntime, depending on the size of your data model.

    On Mon, Nov 4, 2013 at 8:29 AM, Andrus Adamchik wrote:
    Wonder if there’s any benefit here vs. just starting separate ServerRuntimes per session?
  • Andrus Adamchik at Nov 4, 2013 at 1:54 pm

    Having separate ServerRuntimes would require separate connections to
    the database, correct? If so, that would not scale well.
    No. All of them can reuse a single shared DataSource.
    I'm guessing it would also use quite a bit more memory if each session
    had its own ServerRuntime, depending on the size of your data model.
    “Use shared cache” creates most of the memory overhead. The memory overhead of multiple ServerRuntimes (compared with unchecking “use shared cache”) is only in keeping clones of various service singletons (factories, etc.). I should probably try it out in a profiler and see what the exact value is, but my wild guess is < 1MB per runtime.

    Andrus
  • Mike Kienenberger at Nov 4, 2013 at 2:13 pm

    On Mon, Nov 4, 2013 at 8:45 AM, Andrus Adamchik wrote:
    Having separate ServerRuntimes would require separate connections to
    the database, correct? If so, that would not scale well.
    No. All of them can reuse a single shared DataSource.
    I will have to look into that. When I was testing qualifier changes with a
    separate ServerRuntime, I thought it used a separate database connection.
    Maybe it's just not configured to share the DataSource by default.

    I'm guessing it would also use quite a bit more memory if each session
    had its own ServerRuntime, depending on the size of your data model.
    “Use shared cache” creates most of the memory overhead. The memory overhead of multiple ServerRuntimes (compared with unchecking “use shared cache”) is only in keeping clones of various service singletons (factories, etc.). I should probably try it out in a profiler and see what the exact value is, but my wild guess is < 1MB per runtime.
    You mean 'unchecking “Use shared cache” creates most of the memory
    overhead', right?

    I started to comment on this in my first response, but decided it
    didn't matter. There's little point in comparing it against the
    other memory since that memory use is going to be the same whether
    it's one or multiple ServerRuntimes, and will depend on the
    application. In my use case, the amount of database information
    pulled in is pretty small per session most of the time. Maybe one
    table row from five-to-ten tables and a few table rows from a couple
    of other tables. Less often a session might pull in a great deal
    more data from a lot more tables.

    But if it's < 1MB per runtime, then it's unlikely to matter.
    Since my largest XML data map file is 256K and contains 75 entities,
    I assumed that each runtime would also have to load a copy of that
    data into memory.
  • Mike Kienenberger at Nov 4, 2013 at 2:23 pm
    Just so it's clear, I'm not opposed to removing non-shared-cache as an
    option. I just wanted to let you know why I was considering using it
    since you asked.

  • Andrus Adamchik at Nov 4, 2013 at 2:35 pm

    Just so it's clear, I'm not opposed to removing non-shared-cache as an
    option. I just wanted to let you know why I was considering using it
    since you asked.
    Of course. Understood. And I appreciate this discussion.
    I will have to look into that. When I was testing qualifier changes with a
    separate ServerRuntime, I thought it used a separate database connection.
    Maybe it's just not configured to share the DataSource by default.
    Yes, this depends on the configuration of your stack. JNDIDataSourceFactory will result in a shared connection pool (provided by the container); XMLPoolingDataSourceFactory will not.
    But if it's < 1MB per runtime, then it's unlikely to matter.
    Since my largest XML data map file is 256K and contains 75 entities,
    I assumed that each runtime would also have to load a copy of that
    data into memory.

    Sorry, I missed that part. Copies of the DataMap (or, more precisely, the EntityResolver) will use some memory, though the EntityResolver can be loaded once and shared between runtimes using a custom DataDomainProvider or something.

    Andrus

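
    To illustrate the JNDI point: with a container-managed pool, every lookup of the same JNDI name returns the same DataSource. That lookup is essentially what JNDIDataSourceFactory performs for each DataNode, so multiple ServerRuntimes in one webapp share a single pool, while XMLPoolingDataSourceFactory builds a private pool per runtime. A minimal sketch (the JNDI name is illustrative):

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class SharedPoolLookup {

        // Must match the container's resource definition and the JNDI name
        // configured on the Cayenne DataNode; illustrative value.
        static final String JNDI_NAME = "java:comp/env/jdbc/mydb";

        public static DataSource lookupSharedPool() throws NamingException {
            // Every caller - including each DataNode that uses
            // JNDIDataSourceFactory - resolves the same container-managed pool,
            // so extra ServerRuntimes do not multiply connection counts.
            return (DataSource) new InitialContext().lookup(JNDI_NAME);
        }
    }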
  • Mike Kienenberger at Nov 4, 2013 at 2:57 pm

    On Mon, Nov 4, 2013 at 9:32 AM, Andrus Adamchik wrote:
    Yes, this depends on the configuration of your stack. JNDIDataSourceFactory will result in a shared connection pool (provided by the container); XMLPoolingDataSourceFactory will not.
    I was using XMLPoolingDataSourceFactory, which explains the behavior I saw.
  • Mike Kienenberger at Nov 4, 2013 at 4:32 pm

    On Mon, Nov 4, 2013 at 2:16 AM, Aristedes Maniatis wrote:
    We experimented with that option and unticking it caused unexpected behaviour
    that was quite hard to understand, particularly in how it interrelated with the object
    cache. Dima is the person who knows all the details of that experiment.
    I think the fix to CAY-1880 might be all that's needed to make this
    usable. The issue would certainly explain the unusual behavior you
    were seeing.
  • Mike Kienenberger at Nov 4, 2013 at 4:57 pm
    This patch also fixed issues where I was having to manually force an
    entity to be reloaded from the database to reflect changes. I was
    able to comment out changes I'd made just to get my project working
    under 3.1, and all of my project's tests now pass.

    Even if we decide to drop "use shared cache" at some point, I think
    it's worthwhile to commit this patch for 3.1, and possibly 3.0. Any
    reason why I should not do so? I'd like to also write a Cayenne test
    showing the issue, but I probably won't get to that until late this
    week.

