Marek Majkowski wrote:
Pushing a large file over RabbitMQ will obviously work, but I'd say
it's a bad idea. It's easier to put the big things in some temporary
storage and send only lightweight requests over the message bus.
Furthermore, until the new persister lands, sending *really* big messages will
cause problems with memory exhaustion.
HTTP has all sorts of neat machinery for reliably transferring very large
payloads a piece at a time, resuming aborted transfers, and so on. It even has
a verb, DELETE, that can be pressed into service as a poor-man's
acknowledge-and-release-message signal (which can work especially nicely on a
server file system supporting hard links: give every potential receiver a
distinct URL to retrieve and delete, and you get a kind of automatic garbage
collection). I'd go with HTTP for transferring large chunks of data, and the
messaging system for carrying the URLs.
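To make the hardlink trick concrete, here's a minimal sketch in Python (all names here are mine, invented for illustration): each receiver gets a distinct URL backed by a hard link to the same inode, GET serves the bytes, and DELETE drops that receiver's link. The kernel reclaims the data automatically once the last link is gone.

```python
import os
import http.server

class HandoffHandler(http.server.BaseHTTPRequestHandler):
    """Serve per-receiver hard links: GET fetches, DELETE acknowledges.

    When the last hard link is unlinked, the kernel reclaims the data --
    automatic garbage collection with no bookkeeping on our side."""

    root = "/tmp/handoff"  # directory of hard-linked copies (assumption)

    def _path(self):
        # Flatten the URL to a single file name inside `root`.
        return os.path.join(self.root, os.path.basename(self.path))

    def do_GET(self):
        path = self._path()
        if not os.path.exists(path):
            self.send_error(404)
            return
        with open(path, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_DELETE(self):
        path = self._path()
        if not os.path.exists(path):
            self.send_error(404)
            return
        os.unlink(path)          # drop this receiver's link; the data
        self.send_response(204)  # survives until every receiver DELETEs
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def publish(source, root, receivers):
    """Give each receiver a unique hard-linked URL path for `source`."""
    os.makedirs(root, exist_ok=True)
    urls = []
    for receiver in receivers:
        name = "%s-%s" % (os.path.basename(source), receiver)
        os.link(source, os.path.join(root, name))
        urls.append("/" + name)
    return urls
```

The URL paths that `publish` returns are what you'd carry over the messaging system, one per receiver.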
The problems I can see with that scheme: 1. when is it safe to stop
serving a piece of information from the HTTP server? and 2. how do you get the
chunk up to the HTTP server in the first place? Not every client can run an
httpd, and HTTP sadly (!) lacks resumable PUT/POST.
To solve 1: If you know you have a single receiver, use DELETE. If you know you
have exactly n receivers, hard-link the file on the server n times so each
receiver gets a distinct URL, and use DELETE. If you don't know the set of
receivers in advance, you're in trouble, and a timeout is probably your only
sensible option.
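In the unknown-receiver case, the timeout boils down to a periodic sweep of the served directory; a sketch (the age threshold and directory layout are arbitrary choices of mine):

```python
import os
import time

def sweep(root, max_age_seconds):
    """Delete any served file older than `max_age_seconds`.

    This is the fallback when the receiver set is unknown: nobody can
    safely DELETE, so expiry is the only garbage collection left."""
    now = time.time()
    removed = []
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if now - os.stat(path).st_mtime > max_age_seconds:
            os.unlink(path)
            removed.append(name)
    return removed
```

You'd run this from cron or a timer, accepting that a receiver who shows up after the deadline simply loses.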
To solve 2: Either run an httpd on clients wishing to send Very Large Files
(with associated firewall and configuration headaches), or use ReverseHTTP to
be httpd-like without the firewall and configuration hassle, or put a CGI
script or similar on the server that supports a protocol you can use to
reliably upload files in a resumable fashion for later download by other clients.
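The upload protocol for that last option can be tiny: the client asks the server how many bytes it already has for a given upload id, then sends only the remainder. Here's a toy in-process sketch of both sides (my own toy protocol, not a standard, and the names are invented):

```python
import os

class ResumableStore:
    """Server side of a minimal resumable-upload protocol: appends are
    accepted only at the current end of the stored data."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def offset(self, upload_id):
        """How many bytes of `upload_id` have already arrived?"""
        path = os.path.join(self.root, upload_id)
        return os.path.getsize(path) if os.path.exists(path) else 0

    def append(self, upload_id, offset, chunk):
        """Accept `chunk` if it starts exactly where the data ends."""
        if offset != self.offset(upload_id):
            raise ValueError("offset mismatch; client must re-query")
        with open(os.path.join(self.root, upload_id), "ab") as f:
            f.write(chunk)

def upload(store, upload_id, data, chunk_size=4096):
    """Client loop: resume from the server's offset, then stream chunks."""
    pos = store.offset(upload_id)  # survives an interrupted earlier run
    while pos < len(data):
        chunk = data[pos:pos + chunk_size]
        store.append(upload_id, pos, chunk)
        pos += len(chunk)
```

In real life `offset` and `append` would be two HTTP requests handled by the CGI script, but the offset-query-then-append handshake is the whole idea.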
Or yeah, if it's not going to kill your server, just send the big files as
large messages. Resumability of partial transfers goes out the window, of course.