I built against tip tonight, and as a preliminary result I see much
better performance all around. The app is handling more requests per second
and memory usage isn't spiraling out of control. There may be other
factors at play giving me bad results for the long test, but the short
ones I've run make me happy. I'll post here if I have anything further to
add after additional testing.

Thanks all
On Tue, Aug 28, 2012 at 7:31 PM, Dave Cheney wrote:

Any updates?
On Mon, Aug 27, 2012 at 11:54 PM, Daniel Skinner wrote:
Interesting! I'll check that out when I have a moment.

On Mon, Aug 27, 2012 at 8:47 AM, Dave Cheney wrote:

Try searching the mailing list for references to GOGC. The default value
is 100, which means the next GC due to the current heap filling up will occur
at +100% of the current heap size. If you are looking in the code, it is
gcpercent in runtime/mgc0.c.

Values smaller than 100 may slow the growth of the heap.

Cheers

Dave
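
For anyone who wants to watch this behaviour directly, here is a minimal sketch (my addition, not from the thread) that retains memory in a loop and prints the collector's next trigger point from runtime.MemStats; run it with different GOGC values and NextGC should sit at roughly (100+GOGC)% of the live heap:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var keep [][]byte
	var m runtime.MemStats
	for i := 0; i < 10; i++ {
		// Retain another 10 MB so the live heap keeps growing.
		keep = append(keep, make([]byte, 10<<20))
		runtime.GC() // force a collection so NextGC is recomputed
		runtime.ReadMemStats(&m)
		fmt.Printf("live heap ~%d MB, next GC at ~%d MB\n",
			m.HeapAlloc>>20, m.NextGC>>20)
	}
}
```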

On 27/08/2012, at 23:32, Patrick Mylund Nielsen wrote:

FWIW I've had a similar problem that I've never been able to solve -- I
would read ~20KB-1MB byte arrays for every request, and then throw
them away. Even though each request didn't leave any stray goroutines
or anything like that, memory usage would just keep increasing (beyond
10GB used, over a span of hours). At first I attributed it to the GC
not being compacting, but there's probably something else going on.
Whether it's in net/http or my app, I can't tell, but your problem
sure sounds familiar.
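
One mitigation for this kind of per-request allocation churn -- not something suggested in the thread, but a common pattern at the time -- is to recycle the large buffers through a free list so steady traffic reuses a bounded set of allocations instead of generating fresh garbage every request. A rough sketch:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// bufFree is a simple free list: handlers take a 1 MB buffer from the channel
// when one is available and hand it back when done.
var bufFree = make(chan []byte, 32)

func getBuf() []byte {
	select {
	case b := <-bufFree:
		return b
	default:
		return make([]byte, 1<<20)
	}
}

func putBuf(b []byte) {
	select {
	case bufFree <- b:
	default: // free list full; let the GC have it
	}
}

func handler(w http.ResponseWriter, r *http.Request) {
	buf := getBuf()
	defer putBuf(buf)
	n, _ := io.ReadFull(r.Body, buf) // per-request read into the reused buffer
	fmt.Fprintf(w, "read %d bytes\n", n)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```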
On Mon, Aug 27, 2012 at 3:26 PM, Daniel Skinner wrote:
Exactly. In my case I'm delivering Twitter Bootstrap resources and have a
custom template solution that integrates with html/template, all done in Go.
The best case I've seen so far was around 697 req/s on the cheapest Linode
server, which is phenomenal compared to what I'm coming from (Python).
I just can't sustain that amount of traffic for too long because of when the
garbage being generated gets collected, and once the app starts
swapping the req/s fall to around 50.

The memory profile is pointing at the size of each connection, and the memory
is recovered eventually, so that's great for bursts of traffic. I'm going to
dig into the internals regardless and see what I can do to sustain as high
a rate as possible. But this is something I'm doing on the side, so I'll have
to dig into it again a bit later.
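
For context, the serving setup described here is roughly a file server for static assets plus template-rendered pages. A minimal sketch of that shape (paths and template contents are illustrative, not Daniel's actual code):

```go
package main

import (
	"html/template"
	"log"
	"net/http"
)

var page = template.Must(template.New("index").Parse(
	`<html><head><link rel="stylesheet" href="/static/css/bootstrap.css"></head>
<body><h1>{{.Title}}</h1></body></html>`))

func index(w http.ResponseWriter, r *http.Request) {
	// Each page render also pulls in css/js/img served by the FileServer
	// below -- the larger the responses, the more per-connection buffering
	// shows up in a heap profile under load.
	page.Execute(w, struct{ Title string }{"hello"})
}

func main() {
	http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir("static"))))
	http.HandleFunc("/", index)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```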


On Mon, Aug 27, 2012 at 8:17 AM, Patrick Mylund Nielsen wrote:

Not for an application that does a large amount of processing for each
request. Keep in mind that basically every framework for every
high-level language, e.g. Python, Ruby, benches far below 600 req/s
even with their "hello world" pages, with "normal" settings.

You'll easily get 20k-40k req/s with an app that doesn't do a lot of
computation, or which caches the results, with net/http and GOMAXPROCS=1.
On Mon, Aug 27, 2012 at 11:50 AM, tomwilde <sedevelopers01@gmail.com> wrote:

Is it just me, or does anyone else find 600 req/s extremely poor performance?

@Anton: Is your application also affected by threading and blocking syscalls?

On Monday, 27 August 2012 05:16:43 UTC+2, Anton Ageev wrote:
Hello Daniel.

I've been running https://github.com/antage/cdnstats in production for weeks
without restarts or any memory leaks. It processes 600-800 req/s.

My advice:
1) If you want to know memory consumption, look at the RSS column in
`top` or `ps`. "Free memory" is a very ambiguous figure on Linux.
2) Check that you're using Go >= 1.0.2.
3) Check that you don't keep references to any fields in http.Request. While
you hold any reference (slice, string, pointer, etc.) to http.Request data,
it can't be freed by the GC. Use the built-in copy if you want to store some
data from a request (see the sketch below).
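
A minimal sketch of that third point (my illustration, not Anton's code): copy the bytes you need out of request data instead of retaining a small slice that pins a large backing array.

```go
package main

import (
	"io/ioutil"
	"log"
	"net/http"
)

// saved holds small identifiers extracted from requests.
var saved [][]byte

func handler(w http.ResponseWriter, r *http.Request) {
	body, _ := ioutil.ReadAll(r.Body) // e.g. a large upload

	// Risky: saved = append(saved, body[:16]) would keep the entire body's
	// backing array alive for as long as the 16-byte slice is retained.

	// Safer: copy out only the bytes you need, so the big buffer can be freed.
	id := make([]byte, 16)
	copy(id, body)
	saved = append(saved, id)

	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```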
On Sunday, August 26, 2012 7:32:33 AM UTC+4, Daniel Skinner wrote:

(First off, I have no experience with the volume of traffic I mention.)

I have a web application I'm working on and I did some rudimentary tests
with httperf to check performance. Making 600 requests a second, I was
watching vmstat and watched free memory fall until the app started swapping
and couldn't handle the load anymore. If I waited long enough (3-8 minutes),
the memory was eventually freed.

So I did a memory profile and reproduced the test locally long enough to
produce some data. I wasn't sure of the best way to decide when to write the
profile, so I hooked the memprofile write-out to a signal I send to the app
(after hammering it for a little while). It shows the biggest culprit to be
bufio.NewReader, and the caller listed in kcachegrind is
net/http.(*Server).newConn.
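
For reference, a minimal sketch of that profile-on-signal approach (my reconstruction, not Daniel's actual code), using runtime/pprof to dump a heap profile when the process receives SIGUSR1:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"os/signal"
	"runtime/pprof"
	"syscall"
)

func writeHeapProfileOnSignal() {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGUSR1)
	go func() {
		for range sig {
			f, err := os.Create("mem.prof")
			if err != nil {
				log.Print(err)
				continue
			}
			pprof.WriteHeapProfile(f) // snapshot of live allocations at this moment
			f.Close()
			log.Print("wrote mem.prof")
		}
	}()
}

func main() {
	writeHeapProfileOnSignal()
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```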

When I tried to dumb down the test with a stripped-down handler that just
does `fmt.Fprintf(w, "hello")`, I noticed memory drop in much smaller
increments, and it seemed to bottom out after 3 or 4 MB.

My guess is my actual app is writing a lot more data out (writing out HTML,
serving CSS, JS, and images with FileServer), so each connection is consuming
more memory, but I'm unsure at what point the GC is kicking in and freeing
memory that isn't being used anymore. Like I said, if I wait long enough
without any new requests then all memory is eventually recovered.

My question is, if my premise above is correct, what are my options besides
throwing more memory at the system to sustain a targeted request rate? Any
insight on things I'm ignorant of here would be much appreciated as well.

Thanks


  • Paul at Sep 11, 2012 at 3:31 am
    I don't know if golang-tip includes these patches by default or not:
    http://codereview.appspot.com/6441097/
    http://codereview.appspot.com/6460108/#ps10001
    For additional reference, see the last post in this thread:
    https://groups.google.com/forum/?fromgroups=#!searchin/golang-nuts/dmitry/golang-nuts/hkd5fjWIGmY/s8kNen0eKr0J

    I think that is worth knowing about in the context of this thread.


  • Daniel Skinner at Sep 11, 2012 at 1:46 pm
    Well, I tested against the wrong URI on my end. I may have been a little too
    eager to test against tip after a long day and time away. The situation is
    the same with memory.

    Regarding the patch sets: just looking at the src I pulled down and a couple
    of items from that first patch set, it doesn't look like they're in tip, but
    that sounds like great work.
  • Daniel Skinner at Sep 11, 2012 at 1:27 pm
    Ran through use of GOGC and tested:

    GOGC=off
    Baseline; ate through all my memory in one test, 290 MB used.

    GOGC=100
    Confirms normal behaviour, 39 MB used.

    GOGC=20
    Normal performance, 35 MB used.

    GOGC=1
    Performance cut in half, 27 MB used.

    This env var seems to govern how much is cleaned up immediately after a
    request. In all these cases (besides `off`), the used memory (27-39 MB) is
    freed five or ten minutes later; I haven't timed this portion.
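
    That delayed freeing a few minutes later is consistent with the runtime's scavenger, which in Go releases of that era returned long-idle heap spans to the OS after several minutes. A small sketch (my addition) that polls runtime.MemStats makes the distinction visible between heap in use, heap the GC has freed but the runtime still holds, and heap already handed back to the OS:

```go
package main

import (
	"log"
	"runtime"
	"time"
)

func main() {
	var m runtime.MemStats
	for {
		runtime.ReadMemStats(&m)
		log.Printf("in use: %d MB, idle (held by runtime): %d MB, released to OS: %d MB",
			m.HeapInuse>>20, m.HeapIdle>>20, m.HeapReleased>>20)
		time.Sleep(10 * time.Second)
	}
}
```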
  • Daniel Skinner at Sep 11, 2012 at 3:54 pm
    I want to clarify my original position again.

    My bad test last night was for a 404, so the same behaviour as noted was
    happening, just at a smaller scale.

    As Stephen noted earlier:

    "8kB of garbage is created by net/http.(*Server).newConn for each
    connection. My guess is that the GC is not running often enough and
    that is causing the memory problem."

    Certainly a great deal of garbage is being collected, but some of it is not
    getting collected right away. I'd sooner blame my own code, but creating a
    mem profile just pointed fingers at "the biggest culprit to be
    bufio.NewReader and the caller listed in kcachegrind is
    net/http.(*Server).newConn".

    My guess was that since the response size is larger for a normal request,
    this was contributing to that immediately unfreed memory. I plan to test
    this by doing a barebones handler that sends a large response, but any tips
    on tracking down this memory usage better, as detailed in my original post,
    would be much appreciated.
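
    A minimal sketch of that planned test (my guess at it, not Daniel's code): a handler that writes a response of configurable size, so the per-connection memory effect can be measured in isolation under load.

```go
package main

import (
	"bytes"
	"log"
	"net/http"
	"strconv"
)

func main() {
	// /big?kb=64 writes a 64 KB response; hammer it with httperf and watch RSS.
	http.HandleFunc("/big", func(w http.ResponseWriter, r *http.Request) {
		kb, err := strconv.Atoi(r.FormValue("kb"))
		if err != nil || kb <= 0 {
			kb = 64
		}
		w.Write(bytes.Repeat([]byte("x"), kb<<10))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```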

    If this just boils down to "net/http does what it does" and I'll have to
    account for that when determining what kind of load can be handled, I may
    dig into the package, but I'd still imagine it does what it does for a good
    reason.

    Thanks
  • Dave Cheney at Sep 11, 2012 at 10:42 pm
    Issues 4028 and 4031 may be of interest. I don't think we can be of
    more assistance unless you can show some code which exhibits this poor
    behaviour.

    Cheers

    Dave

Discussion Overview
group: golang-nuts
categories: go
posted: Sep 11, '12 at 1:41a
active: Sep 11, '12 at 10:42p
posts: 6
users: 3
website: golang.org
