hi all
     I have a simple HTTP server application written in Go. There are
three servers behind a VIP load balancer.

     At first I deployed the Go server directly on port 8080, so the
incoming traffic hit the Go server directly. The memory usage of my Go
server soon increased unreasonably to 400~500 MB, the response time became
very high, and many "timed out" and "couldn't connect to server" errors
appeared. But the CPU usage was very low. It seemed all the goroutines were
blocked somehow, and the server was no longer accepting new requests.

     Then I deployed an nginx server on each machine on port 8080 and
proxied the requests to my Go server at 127.0.0.1:8081. This time the Go
server works fine: memory usage is about 20 MB, CPU is about 5%, and the
response time is about 5 ms.

     Because I can't debug in the online environment, I wonder what was
really happening with the first setup? Is there a known bug?
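
For reference, the server code isn't shown in the thread. A minimal sketch
of the setup described, assuming a plain net/http handler started with the
default http.ListenAndServe (which applies no timeouts), might look like:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello")
    })

    // Listening on 127.0.0.1:8081 as in the nginx setup; the direct
    // deployment would use ":8080" instead. ListenAndServe builds a
    // zero-value http.Server, so no read or write timeouts are set.
    log.Fatal(http.ListenAndServe("127.0.0.1:8081", nil))
}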


  • James Bardin at Oct 23, 2014 at 8:36 pm

    On Thursday, October 23, 2014 3:18:07 AM UTC-4, Kenneth Tse wrote:
    hi all
    I have a simple HTTP server application written in Go. There are
    three servers behind a VIP load balancer.
    What exactly do you mean by a VIP Load Balancer (Reverse Proxy, TPROXY, LVS
    NAT, LVS Direct Routing...?), and how is it configured?

    If nginx fixed the issue, is it using HTTP/1.0 or HTTP/1.1 for the backend?

    My guess is that something is preventing the connections from closing, and
    they just continue to collect in a KeepAlive state.
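
    One way to check that guess (not shown in the thread; a sketch assuming
    the server uses the default mux): expose the pprof handlers and watch
    the goroutine count, since every kept-alive connection holds a serving
    goroutine blocked waiting for the next request.

    package main

    import (
        "fmt"
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
        "runtime"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })
        // Hypothetical helper endpoint: report the current goroutine count.
        http.HandleFunc("/goroutines", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, runtime.NumGoroutine())
        })

        // Under load, a steadily growing number of goroutines parked in
        // net/http.(*conn).serve (visible at /debug/pprof/goroutine?debug=1)
        // suggests connections that are never being closed.
        http.ListenAndServe("127.0.0.1:8081", nil)
    }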

  • Dave Cheney at Oct 24, 2014 at 6:10 am
    For a long time nginx only spoke HTTP/1.0 [1] to its backends, that is,
    each backend connection was closed at the completion of the request. As
    nginx is the frontend proxy, and is studious about applying timeouts in
    all their forms, when it ditches an incoming connection for a timeout
    violation it also closes the backend connection -- hence, no resource
    leaks.

    For better or for worse, the default behavior of the Go http server is to
    not apply any timeouts. This is usually where the problem comes from.

    [1] Nitpickers note: this isn't true any more, but IMO the vast majority
    of nginx configurations continue to utilise the legacy HTTP/1.0 behaviour
    described above.
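
    A sketch of what applying those timeouts looks like (not from the
    thread; the durations are illustrative): construct an http.Server
    explicitly rather than relying on http.ListenAndServe's zero-value
    defaults.

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })

        srv := &http.Server{
            Addr:         "127.0.0.1:8081",
            Handler:      mux,
            ReadTimeout:  10 * time.Second, // limit reading each request
            WriteTimeout: 10 * time.Second, // limit writing each response
        }

        // Optionally disable keep-alives entirely, which mirrors the
        // close-per-request behaviour of an HTTP/1.0 backend connection:
        // srv.SetKeepAlivesEnabled(false)

        log.Fatal(srv.ListenAndServe())
    }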
