Today as an experiment I started a new image processing library (for now
only resizing using various filters is supported). The goal is to use all
CPU cores to speed things up.
I divide the image into N parts (where N is runtime.NumCPU()) and process
each part in a separate goroutine. As a result, I got a ~2-2.5x speedup on
a laptop with 4 CPU cores, and ~1.5x with 2 CPU cores.
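The splitting described above can be sketched roughly like this (the names `parallelApply` and `processRow` are hypothetical placeholders, not the library's actual API; `processRow` just stands in for the real filter work):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// processRow is a stand-in for the per-pixel resize/filter work.
func processRow(row []float64) {
	for i := range row {
		row[i] *= 0.5
	}
}

// parallelApply splits the image rows into n contiguous bands and
// processes each band in its own goroutine.
func parallelApply(img [][]float64, n int) {
	var wg sync.WaitGroup
	for part := 0; part < n; part++ {
		start := part * len(img) / n
		end := (part + 1) * len(img) / n
		wg.Add(1)
		go func(rows [][]float64) {
			defer wg.Done()
			for _, r := range rows {
				processRow(r)
			}
		}(img[start:end])
	}
	wg.Wait() // block until every band is done
}

func main() {
	// On Go versions before 1.5 GOMAXPROCS defaults to 1,
	// so this call is needed to actually use all cores.
	runtime.GOMAXPROCS(runtime.NumCPU())

	img := [][]float64{{0, 2, 4, 6}, {8, 10, 12, 14}}
	parallelApply(img, runtime.NumCPU())
	fmt.Println(img[0])
}
```

Because each goroutine only touches its own band of rows, no locking is needed beyond the final WaitGroup.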
The only thing the library user should do is enable the use of all CPU
cores by calling runtime.GOMAXPROCS(runtime.NumCPU()).
But now I wonder: what is considered the right approach for a package? To
use goroutines internally for faster calculations, or to leave all
parallelization to the end user?
(I am sorry, English is not my native language.)
You received this message because you are subscribed to the Google Groups "golang-nuts" group.