nodejs cluster. I am trying to get a handle on what the performance
characteristics of nodejs should be. I am running the latest stable
node, 0.6.13, on Ubuntu 11.10, and testing on Amazon EC2 with an
"m1.small" and a "c1.medium".
Based on, say, a 1GHz (m1.small) or 2.5GHz (c1.medium) processor, how
many concurrent connections should a single nodejs process be able to
handle? I start to see strange performance issues when an instance
has around 500 concurrent HTTP connections. Instead of performance
degrading gradually, there are these strange 1-2 second gaps where
something seems to be waiting.
For this particular test, I used node's http library for the client
on a "c1.medium" load-test machine, and the nodejs http library again
for the server on a different Amazon "c1.medium". I started 1000
concurrent requests simultaneously. Here is a portion of my results
that illustrates the issue:
Request Number   Response Time (ms)   Time Spent on Server (ms)
696              818                  0
697              818                  0
698              819                  1
699              3354                 1
700              3356                 0
701              3358                 0
So here at request number 699, you see the response time jump by about
2500ms. The "response time" is what the load tester measures, and the
"time spent on server" is what the server under test reports as its
time spent handling the request. Obviously, I realize that if the CPU
is busy requests will queue up, but is there any way to get insight
into this? Is this a problem with TCP tuning on my Linux instance? I
have max connections for the instance set to 20,000.
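For what it's worth, one thing I could try is sampling event-loop lag
on the server process (a sketch I haven't run as part of the numbers
above; the interval and threshold are arbitrary):

```javascript
// Event-loop-lag probe: a timer that should fire every INTERVAL ms.
// If the same 1-2 second gaps show up here, the node process itself is
// stalling (GC pause, a long tick); if the loop stays responsive, the
// delay is more likely in the kernel's accept queue / TCP stack.
var INTERVAL = 100; // ms between samples
var last = Date.now();

var probe = setInterval(function () {
  var now = Date.now();
  var lag = now - last - INTERVAL; // how late this tick fired
  if (lag > 50) {
    console.log('event loop stalled for ~' + lag + 'ms');
  }
  last = now;
}, INTERVAL);
```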
Basically, I am trying to figure out how many concurrent connections
and/or requests per second a single node process should be able to
handle when it is just responding to nearly empty HTTP requests.
You received this message because you are subscribed to the Google
Groups "nodejs" group.