In a clustered scenario (3 brokers) with mirrored queues, what is the
correct way to determine which node is the "master" via inspection?

One approach would be to look at the list of queues and see which node is
hosting the queue (as opposed to slave_nodes). However, I'm unsure if
there's some scenario where queue 'A' is hosted by one broker, while queue
'B' is hosted by another.

Another approach is to go off of whichever node has the statistics database,
but I'm unsure if there's a 1:1 correlation there.

If I were to use some combination of rabbitmqctl or rabbitmqadmin commands,
what heuristic should I use?
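
For concreteness, the sort of inspection I had in mind looks roughly like
the sketch below. It reads each queue's hosting node ('node') and its
'slave_nodes' from the management plugin's HTTP API, which is the same data
rabbitmqadmin reports; the host, port and credentials are placeholders:

    import requests

    # List every queue with the node currently hosting it (its master)
    # and the nodes holding its mirrors.
    resp = requests.get('http://localhost:15672/api/queues',
                        auth=('guest', 'guest'))
    resp.raise_for_status()
    for q in resp.json():
        print(q['name'], q['node'], q.get('slave_nodes', []))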

Background info, in case it matters:

We're using keepalived to maintain a virtual IP that all clients connect to.
We'd obviously like keepalived to put the VIP on whatever node is the master
rather than on a slave node. Our understanding is that if the master has
messages that aren't in the slave node, a client that connects via the VIP
to the slave node and tries to read messages won't see them as they're only
on the master.

Thanks,

Matt


  • Simon MacMullen at Feb 22, 2012 at 9:58 am

    On 21/02/12 19:04, Matt Pietrek wrote:
    In a clustered scenario (3 brokers) with mirrored queues, what is the
    correct way to determine which node is the "master" via inspection?
    There isn't one. As in, there isn't really an overall master node.
    One approach would be to look at the list of queues and see which node is
    hosting the queue (as opposed to slave_nodes). However, I'm unsure if
    there's some scenario where queue 'A' is hosted by one broker, while
    queue 'B' is hosted by another.
    Absolutely. The original master node *for a given queue* is the one that
    the client that first declared the queue was connected to. But then they
    can fail over to other nodes pretty arbitrarily.
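
    For instance (a sketch with a made-up node address, management port and
    default credentials), declaring a queue while connected to one node and
    then asking the management API where that queue lives should report
    that node as its master, until a failover moves it:

        import pika
        import requests

        conn = pika.BlockingConnection(
            pika.ConnectionParameters('rabbit-node-2.example.com'))
        conn.channel().queue_declare(queue='demo', durable=True)

        # '%2F' is the URL-encoded default vhost '/'.
        q = requests.get(
            'http://rabbit-node-2.example.com:15672/api/queues/%2F/demo',
            auth=('guest', 'guest')).json()
        print(q['node'], q.get('slave_nodes', []))
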
    Another approach is to go off of whichever node has the statistics
    database, but I'm unsure if there's a 1:1 correlation there.
    Again, this is the first node with mgmt to come up, or some other node
    at random in the case of failover.
    Background info, in case it matters:

    We're using keepalived to maintain a virtual IP that all clients connect
    to. We'd obviously like keepalived to put the VIP on whatever node is
    the master rather than on a slave node. Our understanding is that if the
    master has messages that aren't in the slave node, a client that
    connects via the VIP to the slave node and tries to read messages won't
    see them as they're only on the master.
    No, that's not correct. Messages are *always* read from the master node
    (for a given queue).

    Cheers, Simon

    --
    Simon MacMullen
    RabbitMQ, VMware
  • Matt Pietrek at Feb 22, 2012 at 6:39 pm
    Messages are *always* read from the master node (for a given queue).
    This is very helpful. Going a step further, is it true to say that in a
    cluster where we mirror all queues, there's really no benefit to assigning
    the VIP to one node over another?

    Matt

    On Wed, Feb 22, 2012 at 1:58 AM, Simon MacMullen wrote:
    On 21/02/12 19:04, Matt Pietrek wrote:

    In a clustered scenario (3 brokers) with mirrored queues, what is the
    correct way to determine which node is the "master" via inspection?
    There isn't one. As in, there isn't really an overall master node.


    One approach would be to look at the list of queues and see which node is
    hosting the queue (as opposed to slave_nodes). However, I'm unsure if
    there's some scenario where queue 'A' is hosted by one broker, while
    queue 'B' is hosted by another.
    Absolutely. The original master node *for a given queue* is the one that
    the client that first declared the queue was connected to. But then they
    can fail over to other nodes pretty arbitrarily.


    Another approach is to go off of whichever node has the statistics
    database, but I'm unsure if there's a 1:1 correlation there.
    Again, this is the first node with mgmt to come up, or some other node at
    random in the case of failover.


    Background info, in case it matters:
    We're using keepalived to maintain a virtual IP that all clients connect
    to. We'd obviously like keepalived to put the VIP on whatever node is
    the master rather than on a slave node. Our understanding is that if the
    master has messages that aren't in the slave node, a client that
    connects via the VIP to the slave node and tries to read messages won't
    see them as they're only on the master.
    No, that's not correct. Messages are *always* read from the master node
    (for a given queue).

    Cheers, Simon

    --
    Simon MacMullen
    RabbitMQ, VMware
  • Matthew Sackman at Feb 22, 2012 at 8:10 pm

    On Wed, Feb 22, 2012 at 10:39:04AM -0800, Matt Pietrek wrote:
    Messages are *always* read from the master node (for a given queue).
    This is very helpful. Going a step further, is it true to say that in a
    cluster where we mirror all queues, there's really no benefit to assigning
    the VIP to one node over another?
    Correct, though for performance reasons you might like to ensure the
    masters are roughly uniformly spread over all the nodes.
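
    As a rough illustration, something like the following counts how many
    queue masters each node currently holds, using the management plugin's
    HTTP API (the endpoint and credentials here are assumptions):

        from collections import Counter
        import requests

        queues = requests.get('http://localhost:15672/api/queues',
                              auth=('guest', 'guest')).json()
        # e.g. Counter({'rabbit@node1': 40, 'rabbit@node2': 38, ...})
        print(Counter(q['node'] for q in queues))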

    Matthew
  • Aminer at Feb 5, 2013 at 8:11 pm
    I'm faced with much the same issue Matt P. mentions, but I wanted to make
    sure I understand how this works. Is the following correct?


    The system is set up to have a set of three hosts each running a single
    RabbitMQ node. A load-balancer routes traffic evenly between all the
    nodes, and has a health-check to determine whether each node is active and
    healthy. The entire cluster is served by a single external IP address
    (which points to the load balancer).


    Within the cluster, let's say I have a topic exchange, and a number of
    clients who each subscribe to a specific topic (several clients to a topic)
    by creating a permanent, named queue for themselves and binding it to the
    exchange. The messages come from another server whose sole function is to
    generate these messages and publish them to the exchange. Let's assume
    everyone connects successfully and a while passes during which everyone
    gets their messages as expected.
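
    Sketched with the pika client (the names here are made up, and
    'lb.example.com' stands for the load balancer), the consumer side of
    that setup would look something like this:

        import pika

        conn = pika.BlockingConnection(
            pika.ConnectionParameters('lb.example.com'))
        ch = conn.channel()
        ch.exchange_declare(exchange='events', exchange_type='topic',
                            durable=True)
        # Each client declares its own permanent, named queue and binds it
        # to the topic(s) it cares about.
        ch.queue_declare(queue='client-42.orders', durable=True)
        ch.queue_bind(exchange='events', queue='client-42.orders',
                      routing_key='orders.#')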


    Then, one node fails. The load balancer detects this and stops routing
    new connections to that machine. All clients connected to that node lose
    their connections and re-connect (through the load balancer) to one of
    the functioning nodes. They re-subscribe to their named queues, find
    that all the messages they missed while re-connecting are waiting for
    them (assuming the mirroring was up to date when the node failed), and
    continue to receive all new messages posted to their topics.


    The server publishing the messages opens and closes its connection each
    time it needs to publish a message, so it notices nothing. The load
    balancer automatically routes it to a functioning node, and it publishes
    to the exchange, which routes the message to the queue no matter which
    node is currently the master for that queue.
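
    Again as a rough pika sketch with hypothetical names, the publisher's
    open-publish-close pattern is just:

        import pika

        def publish(topic, body):
            conn = pika.BlockingConnection(
                pika.ConnectionParameters('lb.example.com'))
            conn.channel().basic_publish(exchange='events',
                                         routing_key=topic, body=body)
            conn.close()

        publish('orders.created', b'{"id": 123}')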


    Am I missing anything here? Thanks very much for any help!



    On Wednesday, February 22, 2012 12:10:05 PM UTC-8, Matthew Sackman wrote:
    On Wed, Feb 22, 2012 at 10:39:04AM -0800, Matt Pietrek wrote:
    Messages are *always* read from the master node (for a given queue).
    This is very helpful. Going a step further, is it true to say that in a
    cluster where we mirror all queues, there's really no benefit to assigning
    the VIP to one node over another?
    Correct, though for performance reasons you might like to ensure the
    masters are roughly uniformly spread over all the nodes.

    Matthew
  • Emile Joubert at Feb 6, 2013 at 9:32 am
    Hi,

    On 05/02/13 20:11, aminer at groupon.com wrote:
    All clients connected to that node lose their connections and
    re-connect

    In the scenario you describe this should work fine, because the node
    that a client connects to will be the same as the master node for the
    private queue. In general it is possible to connect to one node, while
    consuming from a queue which has its master on a different node. In that
    case consumers must be prepared to receive notification that the master
    is now on a different node (receive basic.cancel) and resubscribe to the
    queue.
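
    A minimal sketch of that consumer behaviour, using the pika client (the
    queue name and host are hypothetical); exactly how a broker-initiated
    basic.cancel surfaces varies by client library and version, so treat
    the loop below as the general shape rather than a recipe:

        import time
        import pika

        QUEUE = 'client-42.orders'   # hypothetical per-client queue

        while True:
            try:
                conn = pika.BlockingConnection(
                    pika.ConnectionParameters('lb.example.com'))
                ch = conn.channel()
                ch.queue_declare(queue=QUEUE, durable=True)  # idempotent
                for method, properties, body in ch.consume(QUEUE):
                    # ... process the message, then acknowledge it ...
                    ch.basic_ack(method.delivery_tag)
                # If deliveries stop because the broker cancelled the
                # consumer (e.g. after a queue failover), fall through,
                # reconnect and re-subscribe.
            except pika.exceptions.AMQPError:
                time.sleep(1)   # node failure / failover: retry shortly
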
    -Emile
