I'm still running into the same problem.
I have a 2 Tomcat + 2 memcached setup, as in the documentation.
I'm experimenting with non-sticky sessions.
Both context.xml files have the same configuration:


<Manager
    className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
    memcachedNodes="n1:store2-1.nexstra.com:11211,n2:store2-2.nexstra.com:11211"
    requestUriIgnorePattern=".*/heartbeat|.*\.(ico|png|gif|jpg|css|js)$"
    transcoderFactoryClass="de.javakaffee.web.msm.JavaSerializationTranscoderFactory"
    sessionBackupAsync="false"
    sticky="false"
/>



Scenario:


Log in and click around ... I see requests going back and forth to both
TCs.

Stop the memcached service on T1 ...
Click around; I see some network connect errors, but failover works
fine on both TCs.

Restart memcached on T1.
Click around a bit.

Stop memcached on T2.
Click ... it jumps to the login page, which means it has lost the session.


So I'm wondering: are sessions always stored in all memcached servers
in the list?
In a failover case, what propagates the session BACK to the original
memcached, so that if it comes back online and then the second one goes
down, the session is still there?


I can produce a test and logs if you like.
Thanks for any insight.

-David


  • Martin Grotzke at Oct 19, 2011 at 11:26 pm
    Hi David,

    can you tell what the sessionId was/is looking like on the relevant steps?
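
    (With non-sticky sessions the memcached nodeId is appended to the session
    id, so it should look roughly like the following; the id itself is made
    up here:

        JSESSIONID=5F7A8A9B2C3D4E5F-n1

    I'd mainly like to see which nodeId the id carries before and after each
    of your steps.)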

    Cheers,
    Martin

  • DALDEI at Oct 19, 2011 at 11:55 pm
    I tried again and this time couldn't reproduce the problem.
    I'll try again tomorrow.

    Could you tell me: when saving the session, is it saved to all of the
    memcached servers or just one?
    If only one, what event determines that it is saved to another server?

    Thanks.



  • Martin Grotzke at Oct 20, 2011 at 7:21 am

    On 10/20/2011 01:55 AM, DALDEI wrote:
    I tried again and this time couldn't reproduce the problem.
    I'll try again tomorrow.
    Ok.

    Could you tell me: when saving the session, is it saved to all of the
    memcached servers or just one?
    For non-sticky sessions one of the nodes is randomly elected as primary
    node (the nodeId is encoded in the sessionId). The logical "next" node
    is chosen as backup node. The session is stored in the primary node and
    additionally in the backup node (under a key "bak:<sessionId>").
    On further requests the session is updated in both the primary and backup
    nodes; if it wasn't modified, the session is only "pinged" in memcached
    (to prevent expiration).
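
    Roughly sketched, the node/key scheme looks like this (illustration only,
    not the actual msm code; the class and method names are made up):

        import java.util.List;

        // Illustration of the non-sticky scheme described above: where the
        // primary and backup copies of a session go, and under which keys.
        class NonStickyBackupSketch {

            // nodeIds come from the memcachedNodes attribute, e.g. [n1, n2];
            // the primary nodeId is the one encoded in the session id.
            static String backupNodeFor(List<String> nodeIds, String primaryNodeId) {
                int i = nodeIds.indexOf(primaryNodeId);
                // the logical "next" node in the list is used as backup node
                return nodeIds.get((i + 1) % nodeIds.size());
            }

            // keys used for the two copies of a session
            static String primaryKey(String sessionId) { return sessionId; }
            static String backupKey(String sessionId)  { return "bak:" + sessionId; }
        }

    So with your two nodes, a session whose id carries n1 has its primary copy
    on n1 and its backup copy (under the "bak:" key) on n2, and vice versa.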

    a) Primary node fails: When the primary node is not available for a
    request (so that the session cannot be loaded from the primary node), it
    will be pulled from the backup node and the backup node will become the
    primary node. When the backup for the session shall be saved, the backup
    node will be the next node relative to the new primary one. As an
    example, with nodes n1, n2, n3: at first n2 would be the primary and n3 the
    backup node. When n2 fails, n3 becomes the new primary and n1 is used as the
    backup node (see the short walkthrough below).
    In your case with 2 nodes there will be no backup node available, so
    that the backup will be skipped. As soon as the other node is available
    again, it will be used as backup node.

    b) Backup node fails: When the backup node fails the backup will be skipped.

    This is the basic and hopefully mostly complete algorithm regarding how
    non-sticky sessions are handled right now.
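
    Walking through case a) with the sketch from above (again, illustration
    only):

        import java.util.Arrays;
        import java.util.List;

        // Walkthrough of case a) using the NonStickyBackupSketch class above.
        class FailoverWalkthrough {
            public static void main(String[] args) {
                // nodes n1, n2, n3; the session id carries n2 as the primary
                List<String> all = Arrays.asList("n1", "n2", "n3");
                System.out.println(NonStickyBackupSketch.backupNodeFor(all, "n2"));       // -> n3

                // n2 fails: the session is pulled from the backup on n3, n3
                // becomes the new primary, and the "next" remaining node, n1,
                // becomes the new backup
                List<String> remaining = Arrays.asList("n1", "n3");
                System.out.println(NonStickyBackupSketch.backupNodeFor(remaining, "n3")); // -> n1

                // with only two nodes configured and one of them down there is
                // no other node left, so the backup is simply skipped
            }
        }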

    Cheers,
    Martin
  • DALDEI at Oct 20, 2011 at 10:48 am
    Thank you, this explains my (occasional) problem.


    In your case with 2 nodes there will be no backup node available, so
    that the backup will be skipped. As soon as the other node is available
    again, it will be used as backup node.
    So what mechanism is used to determine whether the 'backup' node is
    available?
    In my case, with 2 nodes, there was only a short period between turning
    the backup node back on and turning off the primary. Maybe this wasn't
    detected in time?

    In the case where I didn't have a problem, maybe I waited a bit longer?

    How is recovery of the backup node detected? I didn't see constant
    network errors after I took it down, so I am presuming it's not polled
    on every event. Is there a timer or background thread? Or a certain
    number of requests?

    Thanks
    -David
  • Martin Grotzke at Oct 20, 2011 at 11:54 am

    On 10/20/2011 12:48 PM, DALDEI wrote:
    Thank you, this explains my (occasional) problem.
    In your case with 2 nodes there will be no backup node available, so
    that the backup will be skipped. As soon as the other node is available
    again, it will be used as backup node.
    So what mechanism is used to determine whether the 'backup' node is
    available?
    msm uses spymemcached as the memcached client, so spymemcached has to
    reconnect.
    In my case, with 2 nodes, there was only a short period between turning
    the backup node back on and turning off the primary. Maybe this wasn't
    detected in time?
    Perhaps this was the case. spymemcached logs connection errors and also
    logs reconnections (the spymemcached classes are in the package
    net.spy.memcached, so you can search the logs for this).
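
    If you don't see those messages, spymemcached's logging can (from memory,
    so take the exact names with a grain of salt) be routed through
    java.util.logging, e.g.:

        // Route spymemcached's logging through java.util.logging; set this
        // before the memcached client is created, or pass it as a -D JVM arg.
        // (Property and class names quoted from memory.)
        System.setProperty("net.spy.log.LoggerImpl",
                "net.spy.memcached.compat.log.SunLogger");

    and then the level for the net.spy.memcached loggers can be raised in
    Tomcat's conf/logging.properties to make the connect/reconnect messages
    visible.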

    In the case where I didn't have a problem, maybe I waited a bit longer?
    Perhaps this was the case.

    How is recovery of the backup node detected? I didn't see constant
    network errors after I took it down, so I am presuming it's not polled
    on every event.
    spymemcached checks the connection regularly, not per client event.

    Is there a timer or background thread? Or a certain number of requests?
    It must be the former case.

    Cheers,
    Martin
