number of concurrent requests (think about the kind of load we may be
generating for a server if we were scraping real sites). My solution to
this was a generic worker pool: https://github.com/stefantalpalaru/pool (licensed under MPL-2).
The simplified example <https://github.com/stefantalpalaru/pool/blob/master/examples/pool_example.go> shows two usage patterns: the classic "add all the jobs and wait until
they're all done" that you get with multiprocessing.Pool in Python, and a more
flexible "add the jobs whenever you want and get the results whenever
they're available". The latter is used in web_crawler.go <https://github.com/stefantalpalaru/pool/blob/master/examples/web_crawler.go>.
--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.