I am new to Go and am trying to implement an HTTP server using net/http
which internally calls n HTTP services using goroutines; the responses from
the n services are written into a buffered channel, and then the main HTTP
server sends its response.
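
Roughly, the pattern looks like this (a simplified, illustrative sketch,
not my actual code; the URLs are placeholders):

package main

import (
    "io/ioutil"
    "net/http"
)

// fanOut issues one request per URL in parallel and collects the
// response bodies through a buffered channel.
func fanOut(client *http.Client, urls []string) [][]byte {
    results := make(chan []byte, len(urls)) // buffered, so no sender blocks
    for _, u := range urls {
        go func(u string) {
            resp, err := client.Get(u)
            if err != nil {
                results <- nil
                return
            }
            defer resp.Body.Close()
            body, _ := ioutil.ReadAll(resp.Body) // drain fully so the connection can be reused
            results <- body
        }(u)
    }
    out := make([][]byte, 0, len(urls))
    for range urls {
        out = append(out, <-results)
    }
    return out
}

func main() {
    bodies := fanOut(http.DefaultClient, []string{
        "http://backend1.example/", // placeholder URLs
        "http://backend2.example/",
    })
    _ = bodies
}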

The problems I am finding are:
- If MaxIdleConnections per host is low, then each connection beyond that
limit is not persistent. This results in a lot of connections being
established and torn down when an already established connection could have
been reused (I understand that this happens because the limit is exceeded).
- If I set MaxIdleConnections to a very high number, then I may exhaust the
port limit, since n can be high. Creating virtual network interfaces is an
option, but I could not find out how to bind an IP to the http.Client.

I think that in highly network-intensive tasks it may be better not to
immediately close connections above MaxIdleConnections; instead, the
transport should try to reuse them. Perhaps a basic algorithm could let the
pool grow to a stable limit and then close connections again once they are
no longer needed. Or please suggest whether such functionality can be
achieved with the current interface.

  • James Bardin at Aug 14, 2014 at 1:52 am

    On Wednesday, August 13, 2014 9:11:03 PM UTC-4, Suraj Narkhede wrote:

    - If I set MaxIdleConnections to a very high number, then I may exhaust
    the port limit, since n can be high. Creating virtual network interfaces
    is an option, but I could not find out how to bind an IP to the
    http.Client.
    I think this is really the way to go. If the number you need for
    MaxIdleConnections to accommodate your peak concurrency is greater than the
    number of ports available, make more ports!

    You can set the local address in a net.Dialer, which can provide the Dial
    function for your client's http.Transport.

    http://golang.org/pkg/net/#Dialer
    http://golang.org/pkg/net/http/#Transport
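
    For example, a minimal sketch (the local IP is a placeholder for one of
    your extra interfaces):

    package main

    import (
        "net"
        "net/http"
        "time"
    )

    func main() {
        // Bind outgoing connections to a specific local IP; leaving the port
        // zero lets the kernel pick an ephemeral port on that interface.
        dialer := &net.Dialer{
            Timeout:   30 * time.Second,
            LocalAddr: &net.TCPAddr{IP: net.ParseIP("192.0.2.10")}, // placeholder
        }
        client := &http.Client{
            Transport: &http.Transport{
                Dial:                dialer.Dial, // the Dialer provides Dial
                MaxIdleConnsPerHost: 100,
            },
        }
        _ = client // use client.Get(...) as usual
    }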

  • Suraj Narkhede at Aug 14, 2014 at 4:54 am
    Thanks for the links - I will check them.

    The problem with creating virtual network interfaces is that our
    communication goes outside the private network and hence needs public IPs.

    I feel that the usage of MaxIdleConnections should be a bit different: at
    peak concurrency, when the connections exceed this limit, that generally
    means the connections are not idle, and thus new connections should not be
    immediately closed; closing them degrades performance, because we have to
    immediately re-establish a connection with the same host, and too many
    connections end up in the TIME_WAIT state, rapidly exhausting the
    available ports. Instead they should be cached. I think the optimization
    should be in the global connection pool: connections to hosts that are
    idle can be closed, while a host that is in demand now should be allowed
    to scale up to the permitted limits by closing the idle connections to
    other hosts.

  • Tamás Gulácsi at Aug 14, 2014 at 10:27 am
    Why not create such pooling yourself?

  • James Bardin at Aug 14, 2014 at 2:59 pm
    Are you certain that this is adversely affecting you? It seems like only a
    slight optimization of the current behavior, with a much more complicated
    implementation (not to mention that it would make MaxIdleConnsPerHost not
    the hard limit its name implies).

    That limit is specifically for "Idle" connections. If MaxIdleConnsPerHost
    is 10, and you have 1000 active connections, and 10 are released, you
    should have exactly 10 idle and waiting and 990 active. No [keepalive]
    connection is immediately closed if there are no idle connections already.
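
    For reference, this knob is set on the http.Transport; a minimal sketch
    (the default, http.DefaultMaxIdleConnsPerHost, is only 2):

    package main

    import "net/http"

    func main() {
        // Raise the idle cap so released connections are parked for reuse
        // instead of being closed.
        client := &http.Client{
            Transport: &http.Transport{MaxIdleConnsPerHost: 100},
        }
        _ = client
    }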

  • Suraj Narkhede at Aug 14, 2014 at 8:05 pm

    On Thursday, August 14, 2014 7:59:21 AM UTC-7, James Bardin wrote:
    Are you certain that this is adversely affecting you? It seems like only a
    slight optimization of the current behavior, with a much more complicated
    implementation (not to mention that it would make MaxIdleConnsPerHost not
    the hard limit its name implies)

    That limit is specifically for "Idle" connections. If MaxIdleConnsPerHost
    is 10, and you have 1000 active connections, and 10 are released, you
    should have exactly 10 idle and waiting and 990 active. No [keepalive]
    connection is immediately closed if there are no idle connections already.

    James, that's what my expectation is, and if it works that way, then
    that's great. But my observation is different: during stress testing,
    while the total connections stay below MaxIdleConnsPerHost, the connection
    count is constant; but as I increase the concurrency, once the total
    connections from the server to the backend services exceed
    MaxIdleConnsPerHost, the connection count explodes, as most of them (above
    the limit) go into the TIME_WAIT state, and it never stabilizes. Code -
    http://play.golang.org/p/yK6Knh8RkE.
    The backend services are simple: accept the request, sleep for 80 ms, and
    send the response. (A stub of such a backend is sketched below.)
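
    A sketch of the backend stub (the port is illustrative):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Accept the request, sleep for 80 ms, then respond.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            time.Sleep(80 * time.Millisecond)
            fmt.Fprintln(w, "ok")
        })
        http.ListenAndServe(":9001", nil) // illustrative port
    }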

    I am using wrk to do the stress testing. After the initial round of
    testing, the connection count was constant at 10000:

    concurrency a:
    ESTABLISHED => 10000
    TIME_WAIT => 0

    concurrency 2a (by starting another wrk client):
    ESTABLISHED => 10000 -> 11000 (stable around here)
    TIME_WAIT => increasing: 1939 -> 3714 -> 4156 -> ... -> 15066 -> ... ->
    42000 -> ...

    My understanding is that after some time most of the connections should be
    established and TIME_WAIT should be very low, since the connections are
    not idle while the stress test is running.
    Please let me know if there is an issue in the code, or whether this is
    the expected behavior.

  • Suraj Narkhede at Aug 14, 2014 at 8:26 pm
    Came across https://code.google.com/p/go/issues/detail?id=6785 - a similar
    issue has already been reported.

  • James Bardin at Aug 14, 2014 at 8:43 pm

    On Thu, Aug 14, 2014 at 4:25 PM, Suraj Narkhede wrote:

    Came across https://code.google.com/p/go/issues/detail?id=6785 - a similar
    issue has already been reported.
    Ah, that makes sense.
    Luckily that's not likely to come up in real world use for most people.

  • Jakob Borg at Aug 15, 2014 at 9:05 am

    2014-08-14 22:43 GMT+02:00 James Bardin <j.bardin@gmail.com>:

    Ah, that makes sense.
    Luckily that's not likely to come up in real world use for most people.
    Indeed. I'm the original reporter; fixing it looked slightly
    nontrivial so I decided to let it slide for the time being, and as far
    as I know the problem has never popped up in production, just when
    attempting local benchmarks.

    //jb

  • Suraj Narkhede at Aug 15, 2014 at 9:53 pm
    Thanks Jakob!

    Though I am currently just prototyping the system, I think this issue can
    easily show up in production if:
    - The backend services are slow.
    - You have to communicate with many backend services and hosts.

    The observation is that as soon as MaxIdleConnsPerHost is exceeded, the
    subsequent connections are not persistent, which results in a lot of
    connections in TIME_WAIT.

    Even a basic test case with MaxIdleConnsPerHost = 1 and stress testing at
    concurrency 2 results in > 5000 connections in TIME_WAIT.
    Setup (sketched in code below):
    - Only one backend service is called. It responds after 5 ms of sleep.
    - The main HTTP server responds when a timeout occurs (10 ms) or when the
    response is received from the backend service.
    - Concurrency test: weighttp -n 10000 -c 2 -t 1 -k
    "http://127.0.0.1:8084/".

    This basic test also results in > 5000 connections in TIME_WAIT. In
    principle, this test case should be handled over 2 persistent connections.
    When I changed MaxIdleConnsPerHost to 4, keeping everything else the same,
    the net connections established to the backend service were 4.
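
    In outline, the setup looks like this (a simplified, illustrative sketch,
    not the original code; ports are placeholders):

    package main

    import (
        "fmt"
        "io"
        "io/ioutil"
        "net/http"
        "time"
    )

    var client = &http.Client{
        Transport: &http.Transport{MaxIdleConnsPerHost: 1}, // the limit under test
    }

    func callBackend(ch chan<- string) {
        resp, err := client.Get("http://127.0.0.1:9001/") // backend sleeps 5 ms
        if err != nil {
            ch <- "error"
            return
        }
        defer resp.Body.Close()
        io.Copy(ioutil.Discard, resp.Body) // drain so the connection can go idle
        ch <- "ok"
    }

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            ch := make(chan string, 1) // buffered: a late reply won't block the goroutine
            go callBackend(ch)
            select {
            case s := <-ch:
                fmt.Fprintln(w, s)
            case <-time.After(10 * time.Millisecond): // respond on timeout
                fmt.Fprintln(w, "timeout")
            }
        })
        http.ListenAndServe(":8084", nil) // the port weighttp hits
    }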

  • Suraj Narkhede at Aug 15, 2014 at 10:18 pm
    Reported the issue at https://code.google.com/p/go/issues/detail?id=8536,
    and also mentioned the link to
    https://code.google.com/p/go/issues/detail?id=6785 there.
    I think there is a difference between the two: the earlier one asks for a
    maximum limit on the number of connections, while this one asks for better
    utilization of the established connections.

  • San at Aug 19, 2014 at 9:28 pm

    On Thursday, August 14, 2014 8:11:03 AM UTC+7, Suraj Narkhede wrote:
    I am new to Go and am trying to implement an HTTP server using net/http
    which internally calls n HTTP services using goroutines; the responses
    from the n services are written into a buffered channel, and then the
    main HTTP server sends its response.
    - If I set MaxIdleConnections to a very high number, then I may exhaust
    the port limit, since n can be high. Creating virtual network interfaces
    is an option, but I could not find out how to bind an IP to the
    http.Client.
    This may not be related to the issue in Go, but you may want to look at a
    good explanation of the TCP tuple from CloudFlare:
    http://blog.cloudflare.com/cloudflare-now-supports-websockets
    This allows you to reuse an outgoing port toward the n HTTP servers you
    have, just like an incoming port.
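
    A sketch of that idea in Go. Note the assumptions: it relies on
    net.Dialer.Control, which only exists in Go 1.11+ (long after this
    thread), it needs the golang.org/x/sys/unix package, and the local IP is
    a placeholder:

    package main

    import (
        "net"
        "net/http"
        "syscall"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Bind outgoing connections to one fixed local IP and allow the local
        // address to be shared across connections to different remote hosts;
        // the TCP 4-tuple keeps the connections distinct.
        dialer := &net.Dialer{
            LocalAddr: &net.TCPAddr{IP: net.ParseIP("203.0.113.10")}, // placeholder
            Control: func(network, address string, c syscall.RawConn) error {
                var serr error
                if err := c.Control(func(fd uintptr) {
                    serr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEADDR, 1)
                }); err != nil {
                    return err
                }
                return serr
            },
        }
        client := &http.Client{
            Transport: &http.Transport{Dial: dialer.Dial},
        }
        _ = client
    }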

