Setting aside exchanges and routing so as not to bias the discussion,
we have an application where we want this to happen:
1. A producer publishes a single message.
2. N copies of that message get distributed to N consumers.
3. These N consumers are selected from a total of M consumers, each of
which has declared which of the N sets of messages it is interested in.
That's it. Basically, there are N "topics", and each message gets
published to all of those topics - but without the producer having to
do N publishes, and with only one consumable copy of each message per
topic.
Our initial design had a single exchange with N queues bound to it;
each of the M consumers then subscribes to one of the N queues.
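For concreteness, that initial design might be declared something like this against a pika-style channel (the helper function and the "work"/"topic-i" names are made up for illustration):

```python
N = 3  # number of "topics"; illustrative

def declare_single_exchange_topology(ch, n=N):
    # One fanout exchange: the broker copies every published message
    # to every queue bound to it.
    ch.exchange_declare(exchange="work", exchange_type="fanout")
    # N queues, one per "topic". Consumers subscribed to the same queue
    # compete for messages, so each topic's consumer set receives
    # exactly one consumable copy of each message.
    for i in range(n):
        queue = f"topic-{i}"
        ch.queue_declare(queue=queue)
        ch.queue_bind(queue=queue, exchange="work")
```

Each of the M consumers would then basic_consume from the one topic-i queue it has declared interest in.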
This works, but it's not reliable. The problem is that a queue lives
on a single node, and if that node goes down, the queue is gone. It's
not the loss of data from the queue - no big deal in our use case -
but the fact that the queue itself becomes unusable within the
cluster, so that entire set of consumers stops working.
We don't need durable queues or persistent messages and we certainly
don't want to get into Pacemaker; I don't care if some messages are
lost. I just want the overall operation to survive the loss of any
given node in the cluster. That makes me think that queues are the
wrong entity to use for the "topics".
We could do N exchanges, where each node has its own local queue bound
to one of them (and use exchange-to-exchange binding to set up a
single exchange for the producer to publish to), but then *all* the
nodes would get copies of all the messages sent to their "topic",
instead of just one.
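That alternative could be sketched like so (again against a pika-style channel, with invented names; exchange_bind sets up the exchange-to-exchange binding):

```python
def declare_per_topic_exchanges(ch, n=3):
    # Single entry point the producer publishes to.
    ch.exchange_declare(exchange="work", exchange_type="fanout")
    for i in range(n):
        topic = f"topic-{i}"
        # One fanout exchange per "topic", fed from the entry exchange
        # via an exchange-to-exchange binding.
        ch.exchange_declare(exchange=topic, exchange_type="fanout")
        ch.exchange_bind(destination=topic, source="work")

def bind_local_queue(ch, node, topic):
    # Each node declares its own queue and binds it to its topic's
    # exchange - which is exactly why every node subscribed to a topic
    # ends up with its own copy of every message, not just one copy
    # shared among them.
    queue = f"{topic}-{node}"
    ch.queue_declare(queue=queue, exclusive=True)
    ch.queue_bind(queue=queue, exchange=topic)
```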
So I'm kind of at a loss as to how best to fit our workflow into the
RabbitMQ/AMQP model in a moderately resilient fashion. Any suggestions
would be much appreciated.
Mark J. Reed <markjreed at gmail.com>