FAQ
Hi,



I have two brokers clustered with each other. They are load balanced by
an HAProxy instance in front of them.

Let us call the brokers B1 and B2.



The HAProxy runs on the same machine as B1.



The producer P1 and consumer C1 are on different machines.



We are using the amq.direct exchange, and all messages are non-persistent.
All messages are published with mandatory=true and immediate=false.



Initially we start broker B1 alone; broker B2 is down.

We start C1 and then P1; both connect to B1.

After 3-4 seconds, we start B2.

As soon as B2 is up, we crash B1.

Both P1 and C1 then connect to B2, and the test continues until we stop
all three.



During the failover window we observe message loss.



Is this expected, or can we make sure that no messages are lost?

Also, when will the listener set via channel.setReturnListener be called?

Will it be called when the message does not reach a queue from the
exchange, or when it does not reach a consumer from the queue?

In clustered mode, exchanges are global across RabbitMQ nodes. Are the
queues also global?

Please clarify; we are confused about the global nature of queues.

Thanks and Regards,

D.Radhakrishnan
Trainee Engineer-Architecture

IVY Comptech Private Limited
Cyber Spazio, Road No 2, Banjara Hills,
Hyderabad-500034, Andhra Pradesh.

Phone + 91 (40) 66721000 - 4638
Mobile + 91 (0) 9030842104



  • Matthew Sackman at May 21, 2010 at 2:50 pm

    On Fri, May 21, 2010 at 06:55:57PM +0530, Radha Krishnan D wrote:
    > I have two brokers clustered with each other. They are load balanced by
    > an HAProxy instance in front of them.

    OK, that's not a usual setup. If you actually have the two rabbit nodes
    clustered together then you should have no need for the load balancer -
    all clients can connect to either node and still reach every resource in
    the broker.
    > Both P1 and C1 then connect to B2, and the test continues until we
    > stop all three.
    >
    > During the failover window we observe message loss.
    >
    > Is this expected, or can we make sure that no messages are lost?
    That is expected. Queues are located on the node of the connection which
    created the queue. When the node goes down, you'll lose that queue. When
    your clients (re)connect to the broker and reach the other node, they
    are recreating the queues and other resources, but are creating fresh,
    empty queues at that point.

    If you need to be able to withstand node failures then currently your
    best bet is to use the Pacemaker Active/Passive guide at:
    http://www.rabbitmq.com/pacemaker.html
    You will need to publish all messages as persistent to durable queues. You
    should not need the load balancer, and Pacemaker will correctly ensure
    that the two nodes are not both up at the same time.
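
    For illustration, a minimal sketch of publishing persistently to a durable
    queue with the RabbitMQ Java client. The queue name "orders" and the host
    are placeholders, and the try-with-resources form assumes a client version
    in which Connection and Channel are AutoCloseable:

        import com.rabbitmq.client.Channel;
        import com.rabbitmq.client.Connection;
        import com.rabbitmq.client.ConnectionFactory;
        import com.rabbitmq.client.MessageProperties;

        public class PersistentPublish {
            public static void main(String[] args) throws Exception {
                ConnectionFactory factory = new ConnectionFactory();
                factory.setHost("localhost"); // in an Active/Passive setup, the cluster's shared address

                try (Connection conn = factory.newConnection();
                     Channel channel = conn.createChannel()) {
                    // Durable queue: the queue definition survives a broker restart;
                    // its contents survive only if the messages themselves are persistent.
                    channel.queueDeclare("orders", true /* durable */, false, false, null);
                    channel.queueBind("orders", "amq.direct", "orders");

                    // PERSISTENT_TEXT_PLAIN sets deliveryMode=2, i.e. a persistent message.
                    channel.basicPublish("amq.direct", "orders",
                            MessageProperties.PERSISTENT_TEXT_PLAIN,
                            "hello".getBytes("UTF-8"));
                }
            }
        }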
    > Also, when will the listener set via channel.setReturnListener be called?
    Well, you set it yourself. The ReturnListener is invoked as soon as any
    Basic.Return method is read off the socket.
    > Will it be called when the message does not reach a queue from the
    > exchange, or when it does not reach a consumer from the queue?
    This is the difference between immediate and mandatory. Mandatory says
    "blow up if the msg doesn't make it to a queue". Immediate says "blow up
    if the msg doesn't make it through a queue to a consumer".
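
    A rough sketch of both points with the Java client. channel.setReturnListener
    is the older API; later client versions use channel.addReturnListener, which
    is what is assumed below. The routing key "no.such.binding" is hypothetical -
    any key with no matching binding will trigger the Basic.Return:

        import com.rabbitmq.client.AMQP;
        import com.rabbitmq.client.Channel;
        import com.rabbitmq.client.Connection;
        import com.rabbitmq.client.ConnectionFactory;
        import com.rabbitmq.client.ReturnListener;

        import java.io.IOException;

        public class MandatoryReturnExample {
            public static void main(String[] args) throws Exception {
                ConnectionFactory factory = new ConnectionFactory();
                factory.setHost("localhost");
                Connection conn = factory.newConnection();
                Channel channel = conn.createChannel();

                // Called whenever a Basic.Return arrives, i.e. the broker bounces back
                // an unroutable mandatory (or undeliverable immediate) message.
                channel.addReturnListener(new ReturnListener() {
                    @Override
                    public void handleReturn(int replyCode, String replyText, String exchange,
                                             String routingKey, AMQP.BasicProperties properties,
                                             byte[] body) throws IOException {
                        System.out.println("Returned: " + replyCode + " " + replyText
                                + " exchange=" + exchange + " key=" + routingKey);
                    }
                });

                // mandatory=true : return the message if it reaches no queue.
                // immediate=false: do not also require a ready consumer.
                // (Note: the immediate flag was dropped by the broker in later releases.)
                channel.basicPublish("amq.direct", "no.such.binding",
                        true /* mandatory */, false /* immediate */,
                        null, "test".getBytes("UTF-8"));

                Thread.sleep(1000); // give the asynchronous Basic.Return time to arrive
                channel.close();
                conn.close();
            }
        }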
    > In clustered mode, exchanges are global across RabbitMQ nodes. Are the
    > queues also global?
    They are visible globally within the cluster, but each queue lives on a
    single node.
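
    To make that concrete, a hedged sketch (the hostnames b1.example.com and
    b2.example.com are placeholders for the two clustered nodes): a queue
    declared over a connection to B1 lives on B1, yet a client connected to B2
    can still read from it - until B1 goes down.

        import com.rabbitmq.client.Channel;
        import com.rabbitmq.client.Connection;
        import com.rabbitmq.client.ConnectionFactory;
        import com.rabbitmq.client.GetResponse;

        public class ClusterVisibility {
            public static void main(String[] args) throws Exception {
                ConnectionFactory toB1 = new ConnectionFactory();
                toB1.setHost("b1.example.com"); // hypothetical address of node B1
                ConnectionFactory toB2 = new ConnectionFactory();
                toB2.setHost("b2.example.com"); // hypothetical address of node B2

                Connection c1 = toB1.newConnection();
                Channel ch1 = c1.createChannel();
                // The queue is created via a connection to B1, so it lives on node B1.
                ch1.queueDeclare("demo", false, false, false, null);
                ch1.basicPublish("", "demo", null, "ping".getBytes("UTF-8"));

                Connection c2 = toB2.newConnection();
                Channel ch2 = c2.createChannel();
                // Visible cluster-wide: a client connected to B2 can consume from it...
                GetResponse r = ch2.basicGet("demo", true);
                System.out.println(r == null ? "no message" : new String(r.getBody(), "UTF-8"));
                // ...but if node B1 goes down, the queue and its contents go with it.

                ch2.close(); c2.close();
                ch1.close(); c1.close();
            }
        }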

    Matthew
  • Jason J. W. Williams at May 21, 2010 at 4:52 pm
    Great use case for an auto-replay option when dead nodes return. ;)

    -J
