Hi,

I wrote the following program:
https://github.com/mkozjak/gorecord/blob/cpu_profile/gorecord.go

It reads data from a UDP socket and writes it to a file.
According to the attached SVG, there is heavy use of syscall.Syscall, which
seems to lead to high cpu usage when listening to multicast.

The test was done on Ubuntu 14.04.1 LTS (Trusty Tahr), both on the host and via
Docker.
sysctl.conf:
net.core.rmem_max=12582912
net.core.rmem_default=12582912

For one multicast socket read and file write, it uses around 40% cpu
on an Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz (6 cores).
The linked program uses "github.com/davecheney/profile" to profile cpu
usage; I got similar results when not using the profiler, btw.
A profiling file is also attached to this email.
(Bear in mind I'm using different stream sources/mcast addresses.)

Host:

a) one stream:

avg cpu usage: 58.38 (pidstat)
(pidstat 3 -p 63890 - Linux 3.16.0-33-generic (vod1) 04/29/2015 _x86_64_ (24
CPU))

b) two streams:

avg cpu usage: 72.30 (pidstat)

c) three streams:

avg cpu usage: 80.67 (pidstat)


I have also tested the whole thing on kvm (Intel Xeon E312xx (Sandy
Bridge)) without runtime.GOMAXPROCS(2):
cat /proc/cpuinfo | grep processor | wc -l
1

a) one stream:

cat /proc/18113/status | grep ctxt
voluntary_ctxt_switches: 1286
nonvoluntary_ctxt_switches: 2902

avg cpu usage: 1.14 (pidstat)


b) two streams:

cat /proc/18113/status | grep ctxt
voluntary_ctxt_switches: 6233
nonvoluntary_ctxt_switches: 14674

avg cpu usage: 7.14 (pidstat)


c) three streams:

cat /proc/18113/status | grep ctxt
voluntary_ctxt_switches: 19426
nonvoluntary_ctxt_switches: 49421

avg cpu usage: 12.81 (pidstat)


Context switches were taken at similar intervals (two minutes after adding
a new stream recording).


Does anyone have any idea why such high cpu usage is seen here? Are there any
problematic parts in the gorecord code that would introduce such load?
Bear in mind the rpc methods in the code aren't used during testing.


Thank you in advance!

With regards,
Mario Kozjak

  • Mario Kozjak at May 4, 2015 at 7:30 am
    Any ideas on this, maybe?
  • Dave Cheney at May 4, 2015 at 8:32 am
    I had a quick look at the code and my suggestion is to focus on the error
    paths. In most cases, when your code hits an error it immediately jumps
    back to the top of the loop and retries the operation; this is basically
    spinning and probably accounts for the high cpu usage.

    Also, in many places where you call a Read method you are discarding the
    number of bytes returned. You should not do this: you cannot assume that
    Read will return len(buf) bytes, so you must reslice the buffer after the
    Read call to the number of bytes actually read.
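
    For illustration, a minimal sketch of what I mean (this is not gorecord
    itself; the multicast address, file name and buffer size are made up). It
    backs off instead of spinning when Read fails and writes only the bytes
    actually read:

        package main

        import (
            "log"
            "net"
            "os"
            "time"
        )

        func main() {
            addr, err := net.ResolveUDPAddr("udp4", "232.235.2.40:1234")
            if err != nil {
                log.Fatal(err)
            }
            conn, err := net.ListenMulticastUDP("udp4", nil, addr)
            if err != nil {
                log.Fatal(err)
            }
            out, err := os.Create("stream.ts")
            if err != nil {
                log.Fatal(err)
            }
            defer out.Close()

            buf := make([]byte, 1500)
            for {
                n, _, err := conn.ReadFromUDP(buf)
                if err != nil {
                    // Back off briefly rather than retrying in a tight loop.
                    log.Println("read:", err)
                    time.Sleep(10 * time.Millisecond)
                    continue
                }
                // Reslice to the n bytes actually read before writing.
                if _, err := out.Write(buf[:n]); err != nil {
                    log.Fatal(err)
                }
            }
        }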
  • Mario Kozjak at May 4, 2015 at 9:23 am
    Hello, Dave!

    First of all, thanks for taking a peek at the code!

    For the errors you pointed out, I believe you're talking about line 846
    (https://github.com/mkozjak/gorecord/blob/cpu_profile/gorecord.go#L846).
    I just ran another test, printing the error whenever it occurs. Not a single
    one showed up over a few minutes of testing (with one or two different
    multicast streams, for example), which means 'continue' is never triggered.

    The cpu was at 5.05% on average for one stream (read from the UDP socket and
    write to file) and 10.52% for two.
    The test was done on a kvm machine.

    Regarding zero bytes returned from the socket: I have also tested this just
    now and I never get 0 bytes.
    The multicast is definitely arriving, and each packet is 1328 bytes, so
    'pkt' is always full. So that should not be the cause of the high cpu usage
    either.

    Do you have any other thoughts on this, maybe?

    With regards,
    Mario Kozjak


  • James Bardin at May 4, 2015 at 2:05 pm

    On Monday, May 4, 2015 at 5:23:53 AM UTC-4, Mario Kozjak wrote:

    Regarding zero bytes returned from the socket.. I have also tested this
    right now and I never get 0 bytes.
    The multicast is definitely arriving. Each packet is 1328 bytes large. So
    'pkt' is always full. So that should not be the case of a big cpu usage
    either.
    Still, that's no reason not to do it correctly. You could be sending null
    bytes to something that doesn't expect them at some point, or, this being
    UDP, you could get a fragmented packet with one part lost, junk packets
    hitting the socket, a broken packet from the source, etc. But yes, it
    shouldn't affect the CPU usage unless it's causing errors further down the
    line.

    From the source: if you're receiving packets at any appreciable rate, you're
    going to be thrashing the allocator and GC. You create a new slice for every
    single 1328-byte packet. While it looks nice and clean to have a channel of
    slices to send to your writer goroutine, you can't get any appreciable
    throughput this way; there's just too much garbage generated. You need to
    reuse your buffers.
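
    For example, something along these lines (a rough sketch, not gorecord's
    actual code; the function signature, buffer count and sizes are made up):
    keep a free list of fixed-size buffers and hand each one back once the
    writer is done with it.

        // record copies packets from conn to out, reusing a fixed set of
        // buffers instead of allocating a new slice per packet.
        // Imports assumed: "log", "net", "os".
        func record(conn *net.UDPConn, out *os.File) {
            free := make(chan []byte, 64)
            for i := 0; i < 64; i++ {
                free <- make([]byte, 1500)
            }

            type packet struct {
                buf []byte // backing buffer taken from the pool
                n   int    // number of bytes actually read into it
            }
            packets := make(chan packet, 64)

            // Reader: fill a pooled buffer and pass it to the writer.
            go func() {
                for {
                    buf := <-free
                    n, _, err := conn.ReadFromUDP(buf)
                    if err != nil {
                        free <- buf
                        continue
                    }
                    packets <- packet{buf, n}
                }
            }()

            // Writer: write only the bytes read, then recycle the buffer.
            for p := range packets {
                if _, err := out.Write(p.buf[:p.n]); err != nil {
                    log.Println("write:", err)
                }
                free <- p.buf
            }
        }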

    --
    You received this message because you are subscribed to the Google Groups "golang-nuts" group.
    To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
    For more options, visit https://groups.google.com/d/optout.
  • James Bardin at May 4, 2015 at 2:07 pm


    Oh, and as for the profile, the pprof file isn't much use without the binary
    that created it, but I would lighten the load on the GC first.
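
    For reference, the profile is loaded together with the binary, along the
    lines of (the file names here are just a guess):

        go tool pprof ./gorecord cpu.pprof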

  • Mario Kozjak at May 4, 2015 at 3:07 pm
    Hi, James

    Reusing the buffer didn't lower the cpu usage at all. For one channel it
    was about 10% on kvm and 40% on the real server (!) (the same servers as in
    the first post were used).


    Regards,
    mk
  • Mario Kozjak at May 4, 2015 at 3:53 pm
    Also, here is an example application written in Qt that uses a maximum of
    2.5% cpu per channel (multicast listen and write to file) on both kvm and
    the host:

    https://github.com/pkoretic/StreamIngester/blob/master/src/ingest.cpp
  • Mario Kozjak at May 10, 2015 at 2:36 pm
    Are there any other ideas? Is there something incorrect in my code that
    would produce such cpu consumption (apart from the issues already pointed
    out, which don't really contribute to this problem)?

    I've also tried the Go git version, and the problem persists.

    Regards,
    mk
  • Zlatko Calusic at Jun 24, 2015 at 9:41 pm

    Hello Mario,

    I see you are using the obsolete and no-longer-maintained
    code.google.com/p/go.net repo.

    Why not switch to golang.org/x/net and see if you get any improvements with
    the newer code? Additionally, the switch will let you compile with the
    soon-to-be-released Go 1.5 and see if that provides any further improvement.
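
    For example, if it's the ipv4 subpackage that gorecord imports (just a guess
    on my part), the switch is only an import path change:

        // before:
        import "code.google.com/p/go.net/ipv4"

        // after:
        import "golang.org/x/net/ipv4"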

    Hope it helps.

    Regards,

    --
    Zlatko

  • Mario Kozjak at Jul 21, 2015 at 2:18 pm
    Hello,

    It seems this line is problematic for our use case, where multiple multicast
    groups need to be bound to the same port, for example 232.235.2.40:1234 and
    232.235.2.32:1234:

    https://github.com/golang/go/blob/master/src/net/sock_posix.go#L194

    By removing the syscall.AF_INET case and rebuilding Go, we saw a drastic
    performance improvement.
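
    For reference, one alternative to patching the tree (untested here; the
    addresses below are examples, and a nil interface lets the system pick one)
    is to bind the shared port once on the wildcard address and join each group
    via golang.org/x/net/ipv4:

        package main

        import (
            "log"
            "net"

            "golang.org/x/net/ipv4"
        )

        func main() {
            // One socket bound to the shared port.
            c, err := net.ListenPacket("udp4", "0.0.0.0:1234")
            if err != nil {
                log.Fatal(err)
            }
            p := ipv4.NewPacketConn(c)

            // Join each multicast group on the same socket.
            for _, g := range []string{"232.235.2.40", "232.235.2.32"} {
                if err := p.JoinGroup(nil, &net.UDPAddr{IP: net.ParseIP(g)}); err != nil {
                    log.Fatal(err)
                }
            }

            // Ask for the destination address so packets can be demuxed per group.
            if err := p.SetControlMessage(ipv4.FlagDst, true); err != nil {
                log.Println("control messages unavailable:", err)
            }

            buf := make([]byte, 1500)
            for {
                n, cm, _, err := p.ReadFrom(buf)
                if err != nil {
                    log.Fatal(err)
                }
                if cm != nil {
                    log.Printf("%d bytes for group %v", n, cm.Dst)
                }
            }
        }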

    --
    Mario Kozjak

  • Mario Kozjak at Mar 18, 2016 at 1:01 am
    Hello. We are seeing major improvements on this issue with Go 1.6.

    Has anything changed in this area, and what might the underlying problem
    have been?
