Not sure if anyone has seen this joy before (1.7.1):

Port server moxi on node 'ns_1@rdu-membase-14.xxx.com' exited with
status 139. Restarting. Messages: 2011-10-24 11:54:39:
(cproxy_config.c.317) env: MOXI_SASL_PLAIN_USR (5)
2011-10-24 11:54:39: (cproxy_config.c.326) env: MOXI_SASL_PLAIN_PWD
(8)

I saw a few notes on this exit status on the Couchbase site, but honestly
couldn't tell whether it was supposedly fixed or not. We're reinstalling
Memcached today to get back to a working cache baseline for Q4.


  • Perry Krug at Oct 25, 2011 at 3:41 am
    FYI, there is no interaction with moxi if you're using the Enyim client.

    Perry Krug
    perrykrug@gmail.com


    On Mon, Oct 24, 2011 at 5:16 PM, bradrover wrote:

  • Matt Ingenthron at Oct 25, 2011 at 5:00 am
    Hi Brad,

    Just a couple quick points...

    If you're using moxi from Enyim, you probably have it misconfigured. Are
    you sure you have moxi in the path?
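
    Matt's path question can be checked directly on the node; a minimal sketch
    (the only assumption is the binary name `moxi`):

    ```shell
    # Return success if the named binary is resolvable on PATH.
    check_bin() {
      command -v "$1" >/dev/null 2>&1
    }

    if check_bin moxi; then
      echo "moxi found at: $(command -v moxi)"
    else
      echo "moxi is NOT on PATH" >&2
    fi
    ```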

    Also, what you're seeing there is clearly moxi exiting unexpectedly, but
    it is also being restarted by the Membase cluster manager. In the worst
    case, the clients attached to that moxi would have to reconnect for a
    moment; no other clients would be affected, and there would be no data
    loss (though operations in flight can return errors).

    Was a core file generated? Exit status 139 indicates a segmentation
    fault, so there should be a core file that might give us more info on
    what happened; the log output itself doesn't say why. Was there anything
    in the log just above it?
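
    The 139 can be decoded directly: a supervisor reports 128 + signal
    number, and 139 - 128 = 11, which is SIGSEGV. A sketch for decoding the
    status and preparing for a core dump (assumes Linux; the moxi binary and
    core file paths in the comment are placeholders):

    ```shell
    # Exit statuses above 128 mean "terminated by signal (status - 128)".
    status=139
    sig=$((status - 128))
    echo "terminated by signal $sig"   # 11 is SIGSEGV on Linux

    # Allow core dumps before the next crash, then check where the kernel
    # will write them:
    ulimit -c unlimited
    cat /proc/sys/kernel/core_pattern

    # With a core file in hand, gdb can produce a backtrace (paths are
    # placeholders):
    # gdb /opt/membase/bin/moxi /tmp/core.moxi.12345 -batch -ex bt
    ```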

    I can tell you that some moxi issues were fixed in 1.7.2. Bugs happen,
    and we fix all of the ones we know about.

    Thanks,

    Matt

    On 10/24/11 2:16 PM, "bradrover" wrote:


    --
    Matt Ingenthron - Director, Developer Solutions
    Couchbase, Inc.
  • Bradrover at Oct 25, 2011 at 1:56 pm
    The Enyim client is the only client hitting Membase, so somehow moxi is
    being used, Perry, possibly because we are using a memcached-type client?
    Something went wrong when we switched the Enyim client from membase to
    memcached type. The bucket is memcached type as well. I logged over a
    million errors in 2 hours, and these moxi process failures started
    happening at the same time on the server side.

    If you follow my trail on this group you can see we reverted the client
    to type 1 because of unexplained 401 errors coming back from the pool
    config URL when Enyim tries to reconfigure the socket pool. This seems to
    be due to a bad Basic auth header being sent by the client, even though
    we never configured any credentials to send. That, along with socket
    timeouts (despite 20-second timeout limits) and other things we could
    never explain. I've worked hard to make this fault tolerant, adding a
    throttling policy client side so the pool isn't shut down when occasional
    errors happen. So far it's been a nightmare, to be honest. So many issues
    showed up only under load, which is very hard to test reliably and
    prepare for. We've had a lot of application errors and fallout from this.
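
    For anyone chasing similar 401s, it may help to capture what the pool
    config endpoint actually returns with and without an Authorization
    header. A rough curl sketch (the hostname is a placeholder; `/pools` on
    port 8091 is the standard Membase REST bootstrap URI):

    ```shell
    HOST=rdu-membase-14.xxx.com

    # Anonymous request: what a client with no credentials configured
    # should be sending.
    curl -s -o /dev/null -w "anon: %{http_code}\n" "http://$HOST:8091/pools"

    # Same request with an explicit empty-credential Basic header
    # (":" base64-encodes to "Og=="); a malformed or stale header like this
    # is one way to get a 401 back from an otherwise-open endpoint.
    curl -s -o /dev/null -w "with header: %{http_code}\n" \
      -H "Authorization: Basic $(printf ':' | base64)" \
      "http://$HOST:8091/pools"
    ```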

    Just trying to give you some feedback you can use, and help others trying to
    get this going. I don't have time to mess with it anymore. Thanks for all
    the help.

Discussion Overview
group: enyim-memcached
categories: memcached
posted: Oct 24, '11 at 9:16p
active: Oct 25, '11 at 1:56p
posts: 4
users: 3
website: memcached.org
