On Apr 11, 2012, at 9:40 AM, Jeremy Rudd wrote:
On 4/10/2012 6:09 PM, Jorge wrote:
That's exactly what threads_a_gogo is for:
threads_a_gogo sounds cool, but what are the technical details of using it?
1. What does it use in the back-end to schedule / manage tasks? child_process?
It creates pthreads and runs V8 isolates in them.
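In JS that looks something like this (thread.create()/.eval() and the (err, result) callback are my recollection of TAGG's API; treat the exact signatures as assumptions):

  var tagg = require('threads_a_gogo');

  // create() spawns a pthread running its own v8 isolate
  var thread = tagg.create();

  // the program executes in that isolate, off node's main thread
  thread.eval('6 * 7', function (err, result) {
    console.log(result); // the value computed in the thread
    thread.destroy();
  });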
2. child_process takes some 30 ms to spawn a child thread. How much time does GoGo take?
When I run this:
$ time node -e "require('threads_a_gogo').createPool(100).destroy()"
real 0m0.387s

That's 3.87 ms per thread.
But note that these 0.387 s also include the time to start node, the time to require('threads_a_gogo'), and the time to destroy the 100 threads, so the real cost must be a bit less than 3.87 ms per thread.
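If you want to factor that out, you can time an empty node invocation as a baseline and subtract it (the figures are machine-dependent, of course):

  $ time node -e ""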
3. How does it internally work? Are there constantly running threads + scheduler? Or does it spawn a new thread per task?
You could create/destroy the threads on demand, but I would rather create a thread pool of 2x or 3x the number of cores, install the functions that do the calculations in them only once, and then reuse them again and again via pool.any.eval(program, cb). Each thread takes only ~2 MB, and when idle it uses exactly 0% cpu.
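Something along these lines (pool.load() and the (err, result) callback signature are my recollection of TAGG's API, and fibo.js is a hypothetical file defining the calculation functions):

  var os = require('os');
  var tagg = require('threads_a_gogo');

  // a pool of 2x the number of cores
  var pool = tagg.createPool(2 * os.cpus().length);

  // install the calculation functions in every thread, only once
  pool.load(__dirname + '/fibo.js');

  // then reuse the threads again and again: .any.eval() hands
  // the job to any available thread in the pool
  pool.any.eval('fibonacci(40)', function (err, result) {
    console.log('fibonacci(40) ->', result);
    pool.destroy();
  });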
4. Is it recommended for 1-10 ms tasks as well? Do you have any idea / stats to show performance with / without GoGo?
Threads_a_gogo lets you run these blocking calls in parallel, so:
- they won't block node's main thread
- you'll be exploiting all the cpu cores in the machine
With an n-core machine you'll be serving results n times faster, but even on a single-core machine you should use threads_a_gogo, simply to avoid blocking node.
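You can see the non-blocking part for yourself with something like this (fibonacci() here is just an illustrative CPU-bound function, and the callback signature is an assumption): a timer keeps ticking on node's main thread while the heavy call runs in a TAGG thread.

  var tagg = require('threads_a_gogo');
  var thread = tagg.create();

  // a ticker on node's main thread: it would freeze if
  // fibonacci(40) ran here instead of in the thread
  var ticks = 0;
  var timer = setInterval(function () { ticks++; }, 10);

  // pass the function to the thread as source text and call it there
  thread.eval('(' + fibonacci + ')(40)', function (err, result) {
    clearInterval(timer);
    console.log('result:', result, '- main thread ticked', ticks, 'times meanwhile');
    thread.destroy();
  });

  // a CPU-bound function; the named function expression lets it recurse
  function fibonacci (n) {
    return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
  }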
WRT performance, a fibonacci(40) in a threads_a_gogo thread runs about twice as fast as in node's main thread, see: <https://gist.github.com/2018811>. It's also a good example because it's very similar to your use case, istm.