I'm writing code for a web service whose server uses compressed
headers/input, so I'm often decompressing small blocks of data (1-4 KB).
I noticed that the standard flate package allocates a lot of memory
during reader initialization (that is, every time you decompress
anything). Just by skipping that allocation and introducing a memory
pool in the flate package, I was able to more than double performance:
for 100k iterations the run-time went from 3.4s down to 1.2s.
So in Close, the pointers are returned to the pool (there's a limit on the
pool size too), and in NewReader the code checks whether there's something
in the pool it can reuse. This saves an allocation of 34116 bytes per request.
I was wondering whether it would be better to add a "Reset" method to the
reader instead, but then I'd have to keep a pool of decompressors myself
(keeping a decompressor for every persistent connection would mean keeping
thousands of pre-allocated structures). Maybe add a function like
SetMemoryPoolLimit, defaulting to 1, so the standard behaviour wouldn't change?
Any chance of getting this change (or such a design) into the standard library?
You received this message because you are subscribed to the Google Groups "golang-nuts" group.