I am using RabbitMQ server 2.6.1, pika 0.9.5, Python 2.6.6, and Ubuntu
10.10.

I have multiple queues and multiple consumers (many to many), each
consumer with a prefetch count of 5.
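For context, a minimal sketch of the consumer setup described above, assuming a broker on localhost; the queue name "task_queue" and the `do_work` placeholder are illustrative, not from my actual code. The import is guarded so the callback can be read (and tested) without pika installed.

```python
# Sketch of a consumer with prefetch_count=5 (pika 0.9.x-style API,
# where basic_consume takes the callback first; newer pika reverses it).
try:
    import pika
except ImportError:  # allow inspecting the callback without pika installed
    pika = None


def do_work(body):
    """Placeholder for the real message processing."""
    pass


def handle_delivery(channel, method, header, body):
    """Process one message, then ack it so the broker may send another."""
    do_work(body)
    channel.basic_ack(delivery_tag=method.delivery_tag)


def run():
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="task_queue", durable=True)
    # At most 5 unack'd messages outstanding on this channel at a time.
    channel.basic_qos(prefetch_count=5)
    channel.basic_consume(handle_delivery, queue="task_queue")
    channel.start_consuming()


if __name__ == "__main__":
    run()
```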

I've encountered a rare issue whereby a consumer that is currently
processing some messages (and has yet to ack them) suddenly
terminates. Obviously, this part is my problem (it's often the OS
killing the process due to consuming too much memory).

However, the effects are less than ideal. The messages remain unack'd
in the queue (verified via the Management Plugin), even though the
connection is no longer there (unless the OS is for some reason keeping
the socket open?). As a result, that work is never redelivered to
another consumer.
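For what it's worth, this is how I've been cross-checking what the broker thinks from the command line (a sketch; exact info items vary between rabbitmqctl versions, so check `rabbitmqctl help` on 2.6.1):

```shell
# Unack'd message counts per queue
rabbitmqctl list_queues name messages_unacknowledged

# Is the dead consumer's connection still known to the broker?
rabbitmqctl list_connections peer_address peer_port state

# Channels still holding unack'd deliveries keep those messages reserved
rabbitmqctl list_channels connection messages_unacknowledged
```

If the connection no longer shows up here but the messages stay unack'd, that points at the broker, not a lingering socket.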

Moreover, a few times after restarting the consumer, it was still
unable to consume those messages and starved itself, because the broker
treated them as still in flight (and the prefetch count of 5 allows
only 5 unack'd messages at a time), even though nothing was actually
working on them. This usually doesn't happen, but when it does,
restarting again eventually clears it.

Any ideas as to what's happening and if there are any workarounds?

Thanks,
Aaron


  • Gavin M. Roy at Sep 29, 2011 at 7:17 pm
    Are you issuing a basic.recover?

    http://www.rabbitmq.com/amqp-0-9-1-quickref.html#basic.recover
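    A sketch of issuing it on (re)start, assuming your pika version
    exposes `basic_recover` (pika 0.9.x does); with `requeue=True` the
    broker requeues this channel's unack'd deliveries rather than
    redelivering them only to the same consumer. The import is guarded
    so the helper is readable without pika installed.

```python
try:
    import pika
except ImportError:  # allow inspecting the helper without pika installed
    pika = None


def recover_unacked(channel, requeue=True):
    """Issue basic.recover; requeue=True lets the broker redeliver
    this channel's unack'd messages, possibly to other consumers."""
    channel.basic_recover(requeue=requeue)


if __name__ == "__main__":
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    recover_unacked(conn.channel())
```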
    On Thu, Sep 29, 2011 at 3:05 PM, Aaron Voelker wrote:

[original message quoted above]

Discussion Overview
group: rabbitmq-discuss
categories: rabbitmq
posted: Sep 29, '11 at 7:05p
active: Sep 29, '11 at 7:17p
posts: 2
users: 2
website: rabbitmq.com
irc: #rabbitmq

2 users in discussion

Aaron Voelker: 1 post
Gavin M. Roy: 1 post
