FAQ
For you folks who implement http servers that handle a lot
of concurrent traffic, how do you typically manage rate limiting
the number of concurrent connections you're willing to
accept?

I know one technique mentioned is the use of a buffered
channel that one writes to and reads from, in between
which you are handling the request. Do folks use that
for large scale servers?

Are there other better techniques to use with the go
http server libraries?

Jim

--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


  • Robert Melton at Jun 14, 2013 at 8:06 pm

    On Fri, Jun 14, 2013 at 3:30 PM, Jim Robinson wrote:

    For you folks who implement http servers that handle a lot
    of concurrent traffic, how do you typically manage rate limiting
    the number of concurrent connections you're willing to
    accept?

    I know one technique mentioned is the use of a buffered
    channel that one writes to and reads from, in between
    which you are handling the request. Do folks use that
    for large scale servers?

    Are there other better techniques to use with the go
    http server libraries?
    In my experience, the reason for rate limiting tends to have major
    implications on implementation. Limiting for license reasons (either on a
    back-end license, or your own license limiting a customer), limiting for DB
    load, limiting for memory, limiting for cpu... multifaceted limiting. Do
    you have a specific use case?



    --
    Robert Melton

  • James A. Robinson at Jun 14, 2013 at 8:16 pm

    On Fri, Jun 14, 2013 at 1:05 PM, Robert Melton wrote:
    In my experience, the reason for rate limiting tends to have major
    implications on implementation. Limiting for license reasons
    (either on a back-end license, or your own license limiting a
    customer), limiting for DB load, limiting for memory, limiting for
    cpu... multifaceted limiting. Do you have a specific use case?
    Rate limiting to ensure quality of service. I want to set things up
    so that a health check can be used to determine if more servers need
    to be spun up and load balanced to.

    Basically I'm planning on the model of something like an F5 load
    balancer or nginx reverse proxy sitting in front of two or more
    instances of the go-based server. Once the go-based servers reach
    some point of saturation I want to prevent them from getting
    overloaded with processing new requests, and I want to detect the fact
    so that we can automate the spinning up of new go-based servers to
    balance to.

    Hopefully that makes sense. :)


    Jim

  • Robert Melton at Jun 14, 2013 at 8:43 pm

    On Fri, Jun 14, 2013 at 4:15 PM, James A. Robinson wrote:
    On Fri, Jun 14, 2013 at 1:05 PM, Robert Melton wrote:
    In my experience, the reason for rate limiting tends to have major
    implications on implementation. Limiting for license reasons
    (either on a back-end license, or your own license limiting a
    customer), limiting for DB load, limiting for memory, limiting for
    cpu... multifaceted limiting. Do you have a specific use case?
    Rate limiting to ensure quality of service. I want to set things up
    so that a health check can be used to determine if more servers need
    to be spun up and load balanced to.

    Basically I'm planning on the model of something like an F5 load
    balancer or nginx reverse proxy sitting in front of two or more
    instances of the go-based server. Once the go-based servers reach
    some point of saturation I want to prevent them from getting
    overloaded with processing new requests, and I want to detect the fact
    so that we can automate the spinning up of new go-based servers to
    balance to.

    Hopefully that makes sense. :)

    Yep, and if you are checking multiple things (network usage, cpu usage,
    memory usage, disk usage) having a reasonable polling health service is a
    great way to handle it. I think using actual machine stats rather than
    some random "X connections" is a far better and more flexible way to handle
    this, as it will handle different sized hosts, different sized workloads,
    and is nicely disconnected from the specifics.

    In the past, I have had my machines poll themselves and report their raw
    stats back to my load-balancing machine(s); the load balancer would weight
    the stats and compute a score for each machine, then distribute load
    according to those weighted scores. This would happen every few seconds.
    Once a machine was overloaded it would simply stop getting requests until
    its score came down, regardless of why the score was high (it could have
    been network load, disk swapping, or whatever). It wasn't elegant, but
    it was highly effective for a small cluster of about 50 machines. You
    could simply set combined-score thresholds for starting new go-based
    servers on the fly, and when the scores dropped low enough for X time, you
    could drain and shut down go-based servers.

    The code on the servers should be simple. The load balancer would be more
    complex due to scoring, starting up new machines and doing the routing...

    --
    Robert Melton

  • S Ahmed at Jun 15, 2013 at 3:30 pm
    Ah very cool.

    If you use something like haproxy as your load balancer, I guess you would
    have to regenerate the haproxy configuration file from a template and
    a list of your backend machines, and then somehow hot-swap the
    configuration file so haproxy reads the updated version.



Discussion Overview
group: golang-nuts
categories: go
posted: Jun 14, 2013 at 7:30pm
active: Jun 15, 2013 at 3:30pm
posts: 5
users: 3
website: golang.org
