Hi,

I ported a set of routines that read FASTQ files
(http://en.wikipedia.org/wiki/FASTQ_format) to Go.
The code is here: https://gist.github.com/3882029

The idea is to iterate over the input lines until a record can be
returned to the user. The main routine (readFq()) returns a closure;
to get records, the user keeps calling the closure until no more
records are available.

I did some basic benchmarking of different implementations of the
algorithm (C, Lua, Python, and Perl), with the following results:

0.03u 3.97s 41.34r 2176kB c
0.06u 8.69s 109.50r 2176kB go
0.03u 4.93s 131.96r 2192kB luajit
0.02u 2.97s 132.41r 2176kB python27
0.07u 9.89s 275.16r 2192kB perl

As you can see, the C version is the fastest (41.3 s real time).

Then I did some profiling, which gave the following results (top 10
from the pprof tool):

579 96.5% 96.5% 579 96.5% runtime.nanotime
11 1.8% 98.3% 11 1.8% runtime.sigprocmask
9 1.5% 99.8% 9 1.5% scanblock
1 0.2% 100.0% 1 0.2% ReleaseN
0 0.0% 100.0% 34 5.7% bufio.(*Reader).ReadBytes
0 0.0% 100.0% 11 1.8% bufio.(*Reader).ReadSlice
0 0.0% 100.0% 43 7.2% bufio.(*Reader).ReadString
0 0.0% 100.0% 11 1.8% bufio.(*Reader).fill
0 0.0% 100.0% 557 92.8% concatstring
0 0.0% 100.0% 566 94.3% gostringsize


This, together with the profiling graph, tells me that most of the CPU
time is spent doing garbage collection triggered by the concatstring
and gostringsize routines. Am I right?

In the readFq() routine there are plenty of len() calls and substring
selections, so the profiling results are not surprising.
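One likely contributor, given that concatstring dominates the profile: repeated string concatenation with += allocates a fresh string on every iteration, which feeds the garbage collector. A common mitigation (sketched below with made-up helper names, not code from the gist) is to accumulate into a bytes.Buffer and convert to a string once at the end:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// concatNaive builds a string with +=, allocating a new string
// (runtime concatstring) on every iteration.
func concatNaive(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// concatBuffer appends into a growable buffer and converts to a
// string only once, so allocations are amortized.
func concatBuffer(parts []string) string {
	var b bytes.Buffer
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := strings.Split("ACGT ACGT ACGT", " ")
	fmt.Println(concatNaive(parts) == concatBuffer(parts)) // prints true
}
```

The same idea applies to collecting multi-line sequence data: append into one buffer per record instead of concatenating line by line.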

Do you see any obvious changes that could be made to the code to
improve performance? Any comments on the code are welcome.

Thanks,
-drd


group: golang-nuts
posted: Oct 13, '12 at 5:02a
active: Oct 13, '12 at 10:47p
posts: 10
users: 4
