I would like to receive intact WebSocket frames with controlled buffer
sizes (to prevent excessive memory usage caused by bad peers). The
websocket.Message codec doesn't seem to allow that, but according to the
websocket.Conn.Read function documentation I could use it directly to read
the whole frame into my pre-allocated buffer, provided it is large enough.
(Even then I can't always tell whether the whole frame was read, but I
could live with providing a buffer of maxsize+1 bytes and disconnecting if
the read returns more than maxsize bytes.)
The problem is that Conn.Read doesn't work as advertised; the documentation
says: "if msg is not large enough for the frame data, it fills the msg and
next Read will read the rest of the frame data." When reading frames of
over 4088 bytes into larger buffers, it consistently returns data in
4088-byte chunks. The implementation relies on io.LimitedReader to fill
the buffer in one go, which it apparently doesn't. This behavior does of
course conform to the io.Reader contract, which allows any Read to return
fewer bytes than the buffer holds, and that may be what the implementer
had in mind, but in that case the documentation should be fixed.
But fixing the documentation alone wouldn't help me, because if I have to
call Conn.Read in a loop to get the whole frame, I can't tell where the
frame boundaries are. I see two alternatives, either of which would solve
my problem:
1. If the Conn.Read implementation is considered buggy, apply the patch
I've attached. (I'll go through the official review channel if this is
indeed the real bug that needs to be fixed.)
2. Provide an API for asking the size of the frame before receiving it.
(This would be great regardless of alternative 1.)
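To sketch what I mean by alternative 2: something like the following,
where FrameSizer, FrameLen, fakeConn, and checkFrame are all invented
names for illustration and not part of any existing API:

```go
package main

import "fmt"

// FrameSizer is a hypothetical interface sketching alternative 2: the
// connection reports the next frame's payload length before the caller
// commits a buffer to it.
type FrameSizer interface {
	FrameLen() (int, error)
}

// fakeConn stands in for a websocket connection in this sketch.
type fakeConn struct{ next int }

func (c *fakeConn) FrameLen() (int, error) { return c.next, nil }

// checkFrame lets the receiver size its buffer exactly, or reject an
// oversized frame before buffering any of it.
func checkFrame(c FrameSizer, maxSize int) (int, error) {
	n, err := c.FrameLen()
	if err != nil {
		return 0, err
	}
	if n > maxSize {
		return 0, fmt.Errorf("frame of %d bytes exceeds limit %d", n, maxSize)
	}
	return n, nil
}

func main() {
	n, err := checkFrame(&fakeConn{next: 1024}, 4096)
	fmt.Println(n, err)
}
```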