Hi guys,
I'm writing code for a web service server that uses compressed
headers/input, so I'm often decompressing small blocks of data (1-4kb).

I noticed that the standard flate package allocates a lot of memory
during reader initialization (so every time you decompress anything).
Just by skipping this allocation and introducing a memory pool in the
flate package, I was able to more than double the performance.
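The linked pastebin isn't reproduced here, but the baseline pattern being measured looks roughly like this (a sketch, not the poster's actual benchmark code): every call to flate.NewReader allocates fresh decompressor state, even for a 1-4kb payload.

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io"
)

// compress deflates b so there is something realistic to decompress.
func compress(b []byte) []byte {
	var buf bytes.Buffer
	w, _ := flate.NewWriter(&buf, flate.DefaultCompression)
	w.Write(b)
	w.Close()
	return buf.Bytes()
}

// decompress is the allocation-heavy pattern under discussion: each call
// constructs a brand-new flate reader, which allocates its internal
// decompressor state from scratch.
func decompress(b []byte) ([]byte, error) {
	r := flate.NewReader(bytes.NewReader(b))
	defer r.Close()
	return io.ReadAll(r)
}

func main() {
	payload := bytes.Repeat([]byte("header: value\r\n"), 100) // ~1.5kb block
	out, err := decompress(compress(payload))
	fmt.Println(len(out), err == nil && bytes.Equal(out, payload))
}
```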

Code:
http://pastebin.com/cNPcwQq3

For 100k iterations, the runtime drops from 3.4s to 1.2s.

So in Close, the pointers are put into the pool (there's a limit on the
pool size too), and in NewReader the code checks whether there's something
in the pool it can reuse. This saves an allocation of 34,116 bytes per
request.
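The pastebin patches the flate package internally, so it can't be shown verbatim here; but the same get-in-NewReader / put-in-Close design can be sketched from outside the package using sync.Pool plus the flate.Resetter interface (Resetter landed in the standard library in Go 1.4, after this thread). This is a sketch of the idea, not the poster's code:

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io"
	"sync"
)

// readerPool keeps decompressors alive between requests so their ~34kb of
// internal state is reused instead of reallocated on every NewReader call.
var readerPool = sync.Pool{
	New: func() any {
		// The source is replaced via Reset before every use.
		return flate.NewReader(bytes.NewReader(nil))
	},
}

func decompressPooled(b []byte) ([]byte, error) {
	r := readerPool.Get().(io.ReadCloser)
	// Readers returned by flate.NewReader implement flate.Resetter,
	// so the pooled reader can be re-pointed at new input.
	if err := r.(flate.Resetter).Reset(bytes.NewReader(b), nil); err != nil {
		return nil, err
	}
	out, err := io.ReadAll(r)
	r.Close()
	readerPool.Put(r) // put back into the pool, mirroring the Close-time put
	return out, err
}

// compress produces test input.
func compress(b []byte) []byte {
	var buf bytes.Buffer
	w, _ := flate.NewWriter(&buf, flate.DefaultCompression)
	w.Write(b)
	w.Close()
	return buf.Bytes()
}

func main() {
	for i := 0; i < 3; i++ { // later iterations reuse the pooled reader
		out, _ := decompressPooled(compress([]byte("abcabcabc")))
		fmt.Println(string(out))
	}
}
```

Note that sync.Pool drops idle entries under GC pressure, which plays roughly the role of the explicit pool-size limit described above.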

I was wondering whether it wouldn't be better to add some "Reset" function
to the reader, but that would still require keeping a pool of decompressors
(since keeping a decompressor for every persistent connection would mean
keeping thousands of pre-allocated structures). Maybe add a function like
SetMemoryPoolLimit, defaulting to 1, so the standard behaviour wouldn't
change?
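For reference, a Reset did eventually land in the standard library in Go 1.4, as the flate.Resetter interface. The per-connection variant weighed up here (one long-lived reader reused serially, no pool at all, at the cost of ~34kb held per connection) would look roughly like this sketch:

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io"
)

// connDecompressor holds one long-lived flate reader per persistent
// connection. Reset (via flate.Resetter, added in Go 1.4) re-points it at
// each new request body while keeping the already-allocated internal state.
type connDecompressor struct {
	r io.ReadCloser
}

func newConnDecompressor() *connDecompressor {
	return &connDecompressor{r: flate.NewReader(bytes.NewReader(nil))}
}

func (c *connDecompressor) decompress(b []byte) ([]byte, error) {
	if err := c.r.(flate.Resetter).Reset(bytes.NewReader(b), nil); err != nil {
		return nil, err
	}
	return io.ReadAll(c.r)
}

// compress produces test input.
func compress(b []byte) []byte {
	var buf bytes.Buffer
	w, _ := flate.NewWriter(&buf, flate.DefaultCompression)
	w.Write(b)
	w.Close()
	return buf.Bytes()
}

func main() {
	d := newConnDecompressor()
	for _, msg := range []string{"first request", "second request"} {
		out, _ := d.decompress(compress([]byte(msg)))
		fmt.Println(string(out))
	}
}
```

With thousands of connections this pins thousands of decompressor states in memory, which is exactly the objection above; a shared pool amortizes the state across connections instead.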

Any chance of introducing this change (or such a design) into the standard
package?

Thanks.

--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


  • Dan Kortschak at Sep 19, 2014 at 11:00 am
    This is an issue for Go 1.4:

    http://golang.org/issue/7950
    http://golang.org/issue/7836
    On Fri, 2014-09-19 at 01:19 -0700, Slawomir Pryczek wrote:
    --
    Omnes mundum facimus.

    Dan Kortschak <dan.kortschak@adelaide.edu.au>
    F9B3 3810 C4DD E214 347C B8DA D879 B7A7 EECC 5A40
    10C7 EEF4 A467 89C9 CA00 70DF C18F 3421 A744 607C

