One of our systems overfed the rabbit, and it's now a bit angry. The
strange problem that we're having, though, is that it seems that the
commit after the thousandth ack'd message is triggering the persister
log rollover, which is then causing beam to run out of memory. The
only thing we're doing is basic_get'ing messages, acking them, and
committing (the channel is transactional). We've also tried committing
only once per thousand messages; either way, the first commit after
the thousandth ack'd message triggers the persister log rollover,
followed immediately by beam crashing with an out-of-memory error.
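For reference, the loop is roughly the equivalent of this Python sketch
(the method names follow pika-style AMQP client naming, i.e. tx_select /
basic_get / basic_ack / tx_commit, and the queue name is a placeholder;
our actual client code differs):

```python
# Sketch of the drain loop: basic_get each message, ack it, and commit
# the transaction once per COMMIT_EVERY acks (the batched variant we
# also tried). The channel object is assumed to expose pika-style
# tx_select/basic_get/basic_ack/tx_commit methods.

COMMIT_EVERY = 1000  # commit once per thousand acks


def drain(channel, queue="test-queue"):
    """Drain a queue on a transactional channel, committing in batches."""
    channel.tx_select()  # put the channel into transactional mode
    acked = 0
    while True:
        method, properties, body = channel.basic_get(queue=queue)
        if method is None:  # queue is empty
            break
        channel.basic_ack(delivery_tag=method.delivery_tag)
        acked += 1
        if acked % COMMIT_EVERY == 0:
            channel.tx_commit()  # the commit that coincides with rollover
    channel.tx_commit()  # flush any remaining acks
    return acked
```

Whether we commit per message or per thousand, it's always the commit
that lands just after the thousandth ack that coincides with the
rollover and the crash.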
Does anybody know why rolling the persister log would cause the system
to run out of memory? It seems like a strange place to need to
allocate a lot of RAM, but I'm not at all familiar with how the
persister rollover works. Next I'm going to try doing basic_gets
outside a transaction to see whether that prevents the crash, but I'd
like a better understanding of how the log rollover works and why it
seems to need so much memory to succeed.
Also, will the future (1.8?) persister still do the process described
in https://dev.rabbitmq.com/wiki/RabbitPersisterDesign ? Writing the
entire rabbit state to disk every so often doesn't seem like it would
work terribly well when storing huge amounts of data, or am I
misunderstanding how the persister log works?