I recently rewrote my open source database in Go and it's been a great
experience. I ran into a profiling issue that I couldn't find anything about
on Google. I'm importing data from the GitHub Archive into my database, and
while profiling I see that 97.9% of the time is spent in
runtime.mach_semaphore_timedwait:

blackdog2:sky benbjohnson$ go tool pprof
http://localhost:8585/debug/pprof/profile
Read http://localhost:8585/debug/pprof/symbol
Gathering CPU profile from
http://localhost:8585/debug/pprof/profile?seconds=30 for 30 seconds to
   /Users/benbjohnson/pprof/a.out.1364574658.localhost-port8585
Be patient...
Wrote profile to
/Users/benbjohnson/pprof/a.out.1364574658.localhost-port8585
Welcome to pprof! For help, type 'help'.
(pprof) top10
Total: 1504 samples
     1473 97.9% 97.9% 1473 97.9% runtime.mach_semaphore_timedwait
       18 1.2% 99.1% 18 1.2% runtime.sigprocmask
       13 0.9% 100.0% 13 0.9% runtime.mach_semaphore_signal
        0 0.0% 100.0% 31 2.1% net.(*pollServer).Run
        0 0.0% 100.0% 13 0.9% net.(*pollServer).WakeFD
        0 0.0% 100.0% 18 1.2% net.(*pollster).WaitFD
        0 0.0% 100.0% 1473 97.9% runtime.MHeap_Scavenger
        0 0.0% 100.0% 13 0.9% runtime.chansend
        0 0.0% 100.0% 13 0.9% runtime.chansend1
        0 0.0% 100.0% 18 1.2% runtime.exitsyscall

The database funnels every request from the Go HTTP handler through a single
channel to grab a reference to the table, and then hands the actual processing
of the insert off to a separate event loop inside a servlet.

Project code: https://github.com/skydb/sky/tree/go
HTTP handler for
insert: https://github.com/skydb/sky/blob/go/skyd/server_event_handlers.go#L102
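
For anyone reading along without the repo open, here is a minimal sketch of
that pattern. The names (tableRequest, insertRequest, servlet) are made up
for illustration; this is not the actual Sky code linked above, just the
general shape of it:

package sketch

import "net/http"

type table struct{ name string }

// tableRequest travels over the single server-level channel.
type tableRequest struct {
    name  string
    reply chan *table
}

// insertRequest is handed to a servlet's event loop for processing.
type insertRequest struct {
    tbl  *table
    data map[string]interface{}
    done chan error
}

type server struct {
    tables  map[string]*table
    tableCh chan tableRequest
}

type servlet struct {
    insertCh chan insertRequest
}

// Server-level event loop: every table lookup funnels through one channel.
func (s *server) run() {
    for req := range s.tableCh {
        req.reply <- s.tables[req.name]
    }
}

// Servlet-level event loop: inserts for this shard are serialized here.
func (sv *servlet) run() {
    for req := range sv.insertCh {
        // ... write req.data for req.tbl to storage ...
        req.done <- nil
    }
}

// The HTTP handler blocks twice per insert: once on the server channel to
// resolve the table, and once on the servlet channel for the insert itself.
func (s *server) handleInsert(sv *servlet, w http.ResponseWriter, r *http.Request) {
    reply := make(chan *table)
    s.tableCh <- tableRequest{name: r.URL.Query().Get("table"), reply: reply}
    tbl := <-reply

    done := make(chan error)
    sv.insertCh <- insertRequest{tbl: tbl, done: done}
    if err := <-done; err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}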

The server is running at about 110% CPU (on my dual-core MacBook Pro) and
the importer is inserting one item at a time. I'm getting throughput of
about 400 items/second which seems low.

What is causing runtime.mach_semaphore_timedwait to occur? I was thinking it
could just be idle time waiting for the importer between inserts, but the CPU
usage seems really high for that.


Ben Johnson
http://skydb.io

  • Andy Balholm at Mar 29, 2013 at 6:36 pm
    As you stated
    in http://skydb.io/blog/introduction-to-behavioral-databases.html:
    Transactions & Locks = Slow
    The channel contains a lock. By pushing every request through the channel,
    you're creating tremendous contention on the lock. Is there any way to
    eliminate that bottleneck?

    I haven't really looked at your code; I'm just thinking about what can
    happen when you have too many goroutines trying to read or write the same
    channel. If you could eliminate the bottleneck, your code would probably
    run about 50 times faster (since it's wasting 98% of its time in lock
    contention—at least I suspect that is where mach_semaphore_timedwait is
    being called). It would probably also be able to take advantage of both CPU
    cores.
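
    For the server-level lookup specifically, one way to eliminate that
    bottleneck is to guard the table map with a sync.RWMutex so concurrent
    handlers can resolve tables without funneling through a single goroutine.
    A minimal sketch with hypothetical names, not the actual Sky code:

    package sketch

    import "sync"

    type table struct{ name string }

    type server struct {
        mu     sync.RWMutex
        tables map[string]*table
    }

    // Lookups are read-mostly, so many HTTP handler goroutines can hold the
    // read lock at once instead of all contending on one channel.
    func (s *server) table(name string) *table {
        s.mu.RLock()
        defer s.mu.RUnlock()
        return s.tables[name]
    }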

  • Ben at Mar 29, 2013 at 7:37 pm
    Andy-

    I tried pulling out the server level channel and it only slightly improved
    performance. I'll play around with it some more and see if I can figure it
    out. The locks are unfortunately necessary on insert. The query speed is
    really where I'm trying to optimize and that effectively uses no locks and
    is blazingly fast (10MM records/sec/core). I'm just hoping to bump up my
    insert performance. Something doesn't seem quite right with it.


    Ben

  • Ingo Oeser at Mar 30, 2013 at 1:51 am

    AFAICS there are two levels of channels:
    * per servlet
    * per server

    Since both channels are unbuffered and you call the async method and then
    wait for the result anyway, couldn't you just skip the servlet channel
    processing loop and execute the function directly?
    Did you try using buffered channels to get better batching?

    Order of operations isn't guaranteed with the current model anyway, only
    that exactly one message per LevelDB is executed at any time. Is that the
    goal?
    If it is, you might try a sync.Mutex instead of an unbuffered channel.
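
    Here is a minimal sketch of that sync.Mutex idea; the servlet type and the
    insert body are hypothetical, not the actual Sky code:

    package sketch

    import "sync"

    type servlet struct {
        mu sync.Mutex
        // per-servlet LevelDB handle and other state go here
    }

    // insert serializes writes to this servlet's LevelDB without the channel
    // round-trip: the caller does the work itself while holding the lock, so
    // exactly one insert per servlet runs at a time.
    func (s *servlet) insert(key, value []byte) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        // ... write key/value to this servlet's LevelDB ...
        return nil
    }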

    PS: I like the project very much. Would be very useful at work.

  • Carlos Castillo at Mar 29, 2013 at 9:31 pm
    Have you looked at the result of the web command in pprof? It shows a
    graphical representation of the profile, and you can see which code is
    calling mach_semaphore_timedwait (and with what frequency). Sometimes I see
    the problem better with this view of the data.

    Also, are you using go-1.0.3 or tip (soon to be go-1.1)? Tip includes block
    profiling, which lets you find out where and for how long your code is blocking.
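
    A minimal sketch of enabling block profiling on tip, assuming the server
    already imports net/http/pprof for its /debug/pprof endpoints (the address
    and port here just mirror the ones in your original post):

    package main

    import (
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/* handlers, including /block
        "runtime"
    )

    func main() {
        // Record every blocking event; larger rates sample less often.
        runtime.SetBlockProfileRate(1)
        http.ListenAndServe("localhost:8585", nil)
    }

    Then "go tool pprof http://localhost:8585/debug/pprof/block" shows where
    goroutines spend time blocked, which should point at channel or lock
    contention more directly than the CPU profile does.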
  • Ben Johnson at Mar 29, 2013 at 9:57 pm
    Thanks, Carlos. I attached the svg output. This one doesn't actually show any of my code, though. I assume it's because all the samples that were taken were in the runtime. The svg output from before I removed the server-level channel did show some of my code, but I don't have a copy any longer. It seems to all be coming from runtime.MHeap_Scavenger. Would the GC be waiting on something?

    Ben
  • Dave Cheney at Mar 30, 2013 at 12:13 am
    Can you try two things for me?

    1. Run your program with GOGCTRACE=1; you don't need to be profiling.
    This may confirm a suspicion that you have a spinning goroutine
    blocking progress of the GC.

    2. Profile it on a system other than OS X.

    Cheers

    Dave
  • Ben Johnson at Mar 30, 2013 at 12:42 am
    Dave-

    Thanks for the help. I attached the output of running with GOGCTRACE=1. I'll set up a Linux box to profile on as well and get back to you with those results.


    Ben

  • Dave Cheney at Mar 30, 2013 at 12:46 am
    Thanks. That output looks fine; there is no obvious blocking for the GC.
    In fact, you're barely using any heap at all.
  • Dmitry Vyukov at Mar 30, 2013 at 5:02 am
    Looks like a profiler bug -- all the profiling signals are being delivered
    to the blocked scavenger thread. Perhaps you are using an unsupported
    version of OS X.

  • Ben Johnson at Apr 1, 2013 at 4:44 am
    Thanks again to everyone who helped me out. I spent the weekend doing some research and reworking a lot of the structure of my database. I had a misunderstanding about how LevelDB locks iterators, so I thought I had to serialize access. Luckily that's not the case, so I dropped my message loops and got everything down to a single lock per servlet, and the code is greatly simplified. Hurray!

    It also turns out that the GitHub Archive data is not strictly chronological as I originally thought, so I was performing a lot of random inserts (which my database is not optimized for) instead of appends (which it is optimized for). When I ran against append-only data I was able to get 2,000+ appends per second, which seems more reasonable.

    I'll let you all know once I'm able to profile on Linux and I'll post the results.

    Ingo Oeser- Sky has a Google Group if you're interested (https://groups.google.com/d/forum/skydb). Let me know if I can help.


    Ben

  • Val at Dec 26, 2014 at 8:47 pm
    FWIW, I also encountered the "99% spent in runtime.mach_semaphore_timedwait"
    problem in my toy program.

    It happens on OS X 10.9.3 with go1.3, and with go1.4 as well, but not every
    time (I sometimes get lucky and produce a meaningful graph).
    This corroborates the previous posts in this thread. My code only does
    heavy computation, so it looks a bit like a "spinning loop": no channels,
    almost no heap allocation, very few function calls.

    I can't give the equivalent results on Linux, but will be able to in a few
    days.

  • Jonathan Lawlor at Dec 26, 2014 at 9:11 pm
    This is an old thread, but there is a known issue with profiling on
    OS X: https://github.com/golang/go/issues/6047
