Hi there,

I'm new to the concepts of AMQP and RabbitMQ and was wondering if someone
may be able to point me in the right direction or offer some advice.

I have a direct exchange feeding a durable queue. If I load, say, 50,000
messages onto the queue, I then start a consumer to work through those
items.

My consumer creates the connection, then creates the model (I'm using .NET)
and sets the QoS prefetch count to 1. It then calls BasicGet, processes the
message, sends an ack (BasicAck), and repeats until BasicGet returns
null. Finally it disposes the model and closes the connection.
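In outline, that loop looks something like this (a sketch using the RabbitMQ .NET client, not the exact code; the queue name, host name and the Process method are placeholders):

```csharp
// Sketch of the consumer loop described above (RabbitMQ .NET client).
// "localhost", "Test.ItemProcess" and Process() are placeholders.
using (IConnection conn = new ConnectionFactory { HostName = "localhost" }.CreateConnection())
using (IModel model = conn.CreateModel())
{
    model.BasicQos(0, 1, false);          // prefetch count = 1

    while (true)
    {
        // noAck = false: we acknowledge explicitly below
        BasicGetResult result = model.BasicGet("Test.ItemProcess", false);
        if (result == null)
            break;                        // queue drained

        Process(result.Body);             // application-specific work
        model.BasicAck(result.DeliveryTag, false);
    }
}
```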

If I restart the host the RabbitMQ broker is running on (Windows) after my
consumer has processed and ack'd 40,000 messages and is still working through
the remaining items, then when the RabbitMQ broker comes back online the
40,000 messages are redelivered (even though they were acknowledged).

Is this supposed to happen? I would anticipate a number of them being
requeued if the broker were shut down abruptly (i.e. loss of power), as the
acks may not have been written to disk at that stage, but I thought a
graceful restart would flush any uncommitted changes to disk.

If this is by design, any suggestions or advice on how I can minimise the
number of redelivered messages if the broker dies/restarts?

Thanks in advance,

Ben


  • Emile Joubert at Nov 11, 2011 at 4:10 pm
    Hi Ben,
    On 11/11/11 15:12, Ben Lewis wrote:
    I have got a direct exchange feeding a durable queue, if I load say
    50,000 messages onto the queue and then start a consumer to work through
    those items.

    My consumer creates the connection, then creates the model (I'm using
    .NET) and sets the Qos prefetch count to 1, then calls BasicGet,
    processes the message, sends an ack (BasicAck), then enumerates until
    BasicGet returns null. Finally disposing the model and closing the
    connection.
    You don't need to set QoS if you retrieve messages synchronously
    (BasicGet). QoS makes sense with BasicConsume. That is not the cause of
    your trouble though.
    If I restart the host I have the RabbitMQ broker running on (Windows)
    after processing and ack'ing 40,000 messages while my consumer is
    working through the remaining items, when the RabbitMQ broker comes back
    online the 40,000 messages are redelivered (even though they were
    acknowledged).

    Is this supposed to happen?
    No. Acknowledged messages are forgotten about by the broker, so you
    should not be seeing them again. What is the output of

    rabbitmqctl list_queues name messages_ready messages_unacknowledged

    before and after the restart? Is it possible that you are somehow not
    acknowledging the messages? Do you get the same result if you use the
    noAck flag of BasicGet?

    If you are using transactions and failing to commit the transaction then
    you could see apparently acknowledged message reappearing - is that a
    possibility?
    If this is by design, any suggestions or advice on how I can minimise
    the number of redelivered messages if the broker dies/restarts?
    If you don't care about persisting messages then you can publish the
    messages in non-persistent mode (set the delivery mode in the basic
    properties to 1).
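    In the .NET client that is done on the basic properties when publishing;
    a minimal sketch (the model, exchange, routing key and body are
    placeholders):

    ```csharp
    // Publish a message in non-persistent mode (deliveryMode = 1).
    // "my.exchange", "my.routing.key" and body are placeholders.
    IBasicProperties props = model.CreateBasicProperties();
    props.DeliveryMode = 1;   // 1 = non-persistent, 2 = persistent
    model.BasicPublish("my.exchange", "my.routing.key", props, body);
    ```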


    -Emile
  • Ben Lewis at Nov 11, 2011 at 5:44 pm
    Hi Emile,

    Thanks for the response. Here's the output from the rabbitmqctl
    command at different stages:

    First, with me ack'ing the messages:

    Prior to starting consumer
    ------------------------
    Listing queues ...
    Test.ItemProcess 67645 0
    ...done.

    As consumer is consuming, just before rebooting broker
    ------------------------
    Listing queues ...
    Test.ItemProcess 42936 1
    ...done.

    After broker rebooted and RabbitMQ started
    ------------------------
    Listing queues ...
    Test.ItemProcess 67645 0
    ...done.

    With noAck=true the messages are delivered to the client quicker than I can
    reboot the broker, but when it comes back up the queue is empty with 0
    unacknowledged messages.

    I'm not using transactions (I put a separate console app together to test
    this) and I do want to persist and acknowledge the messages. I've stepped
    through my code in the debugger and it's hitting the BasicAck method.

    Kind Regards,

    Ben
  • Simone Busoli at Nov 11, 2011 at 9:33 pm
    Ben, are you supplying the correct deliveryTag to the BasicAck?
  • Ben Lewis at Nov 14, 2011 at 10:28 am
    -- Ben, are you supplying the correct deliveryTag to the BasicAck?

    Yes, I'm passing into BasicAck the DeliveryTag property on the
    BasicDeliverEventArgs class that is returned from the Dequeue method.

    Kind Regards,

    Ben
  • Ben Lewis at Nov 14, 2011 at 10:46 am
    I've done some more testing and knocked up two sample apps:

    Publisher (loads up 100,000 items)
    http://pastebin.com/E0ixXP8c

    Consumer
    http://pastebin.com/yAMcHkbu

    I run the publisher and, once it has enqueued 100,000 items, run the
    consumer. If, after it has consumed 20,000 and is still consuming, I do a
    graceful restart of the RabbitMQ broker's OS (running on Windows Server 2008,
    so Start, Shutdown/Restart), then when the broker comes back online I have
    100,000 items in the queue (even though approx 20,000 were consumed and
    acknowledged).

    I built a test host running Ubuntu, which seems to handle the graceful OS
    restart correctly and reports the expected queue size (approx 80,000).

    If I do a dirty shutdown (remove power, forcefully stop the VM, etc.)
    neither Windows nor Ubuntu has persisted the acks to disk, but I assume
    that's because they hadn't been committed to disk at that stage.

    Kind Regards,

    Ben
  • Emile Joubert at Nov 14, 2011 at 1:01 pm
    Hi Ben,
    On 14/11/11 10:46, Ben Lewis wrote:
    If I run the publisher, then once that has enqueued 100,000 items run
    the consumer. If after it has consumed 20,000 and still consuming, I do
    a graceful start of the RabbitMQ broker's OS (running on Windows Server
    2008, so Start, Shutdown/Restart), when the broker comes back online I
    have 100,000 items in the queue (even though approx 20,000 are consumed
    and acknowledged).
    I'm only able to reproduce your result if I shut down the broker
    forcefully. Stopping the broker by using the service control manager
    or using the rabbitmq-service.bat script does stop the broker
    gracefully, allowing all acknowledgements to be written to disk. I can
    only assume that hitting the Shutdown/Restart button on Windows does not
    first stop all the running services cleanly.
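    For a graceful stop before rebooting the Windows host, the service can
    be stopped explicitly first (a sketch; this assumes the default service
    name "RabbitMQ" and an elevated prompt on the broker host):

    ```
    rem Stop the broker cleanly so pending acks are flushed to disk
    net stop RabbitMQ

    rem ...or, using the script shipped with the broker:
    rem rabbitmq-service.bat stop

    rem After the OS restart the service normally starts on boot,
    rem or it can be started by hand:
    net start RabbitMQ
    ```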

    If the broker is not stopped gracefully then clients must be prepared to
    receive duplicate messages for which acknowledgements were sent, but
    not yet written to disk by the broker. That is the case regardless of
    OS. That may be just a few messages, or tens of thousands.


    -Emile

Discussion Overview
group: rabbitmq-discuss
categories: rabbitmq
posted: Nov 11, '11 at 3:12p
active: Nov 14, '11 at 1:01p
posts: 7
users: 3
website: rabbitmq.com
irc: #rabbitmq
