Just to chime in late here: this is highly annoying behaviour. Either the
gob Encoder should return an error when encoding something larger than 1 GB,
or the Decoder should accept it. I would prefer to raise the limit; is there
a practical reason for the 1 GB cap?

I am also wrestling with pre-processing large swaths of data, and I want
multiple cores to talk to that data, which makes redis a bad plan. Redis
also copies data in RAM and saves it to disk as part of its persistence
strategy.

It can take a long time for a single thread to write large amounts of data
while it takes up double the RAM it should. Redis is great, but not well
suited to storing a very large map. Add in how slow string processing can
be in Go, and you don't have a great solution.

On Thursday, August 21, 2014 at 12:30:20 AM UTC-7, Taru Karttunen wrote:
On 21.08 05:10, Frank Schröder wrote:
If you just have a single map why not use redis or memcache to store the


e.g. we had one internal app that analyzed large amounts of data
(typically 5-500 GB) and produced an output file of 500-2000 MB
containing essentially a huge map.

Managing databases for this with many datasets and ad hoc use would be
tedious, as typically we just want "dump this for later loading".

- Taru Karttunen

Discussion Overview: golang-nuts, posted Apr 30, 2014, last active Feb 4, 2015.