So this (part of the) app is really simple, and this particular goroutine
usage isn't going to grow any more complex:

There's a "main goroutine" that loops, in each loop iteration fires off two
separate goroutines, does some stuff on its own, then waits for the 2
routines to finish, then next loop iteration.

The waiting is done with two booleans (say, jobFooDone and jobBarDone) in
the scope of the main goroutine. This may seem archaic but in this really
quite simple scenario I don't see how it could go wrong -- can you?

Each boolean is only ever set to false by the main routine (beginning of
the loop iteration prior to launching the goroutine), and only ever set to
true by the job goroutine (just before it returns -- btw would be great if
we could defer non-func-call statements). Hardly a real pressing "clean &
proper" syncing need here, or is there? (Not talking about communicating
other data between the goroutines right here, just the waiting part that
WaitGroup would also address.)

I can see in waitgroup.go all sorts of atomic operations being done and I
can see how that will be very very handy when the number of goroutines
being launched is dynamic/unknown/greater than 2 or 3. But for my simple
case, would there be any benefit or indeed necessity to switch to WaitGroup
from my two simple boolean vars?

--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


  • Steve wang at Feb 4, 2013 at 1:54 pm
    It's better to show your code before discussing it.
  • Minux at Feb 4, 2013 at 2:01 pm

    On Mon, Feb 4, 2013 at 9:35 PM, Philipp Schumann wrote:

    Using sync.WaitGroup is quite standard for this kind of thing, and I
    think if you have more than two goroutines to start, you'd better use it
    to help your reader.

    If you're only starting one goroutine, using a done channel is very
    idiomatic.

    I'd suggest you change your code, and don't use low-level atomic
    operations when there are higher-level synchronization primitives
    available.
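    The two idioms mentioned above could be sketched like this (the job
    names and result strings are hypothetical, not the poster's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// runJobs launches two jobs concurrently and waits for both with a
// sync.WaitGroup. Wait() happens-after both Done() calls, so the writes
// to results are guaranteed to be visible afterwards.
func runJobs() [2]string {
	var results [2]string
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); results[0] = "foo done" }()
	go func() { defer wg.Done(); results[1] = "bar done" }()
	wg.Wait()
	return results
}

// waitOne shows the done-channel idiom for a single goroutine: the
// receive happens-after the close, so the write to s is visible.
func waitOne() string {
	var s string
	done := make(chan struct{})
	go func() {
		s = "single job done"
		close(done)
	}()
	<-done
	return s
}

func main() {
	r := runJobs()
	fmt.Println(r[0], r[1])
	fmt.Println(waitOne())
}
```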

  • Roger peppe at Feb 4, 2013 at 2:07 pm

    On 4 February 2013 13:35, Philipp Schumann wrote:
    Each boolean is only ever set to false by the main routine (beginning of the
    loop iteration prior to launching the goroutine), and only ever set to true
    by the job goroutine (just before it returns
    If you're reading a variable in one goroutine that's being set by another
    and you don't have any synchronisation between them, then your
    program behaviour is undefined.

    See http://golang.org/ref/mem

    Also, as Steve Wang says, we can't really tell anything without seeing
    the code in question.

  • Philipp Schumann at Feb 4, 2013 at 7:30 pm
    Thanks everyone. I agree using WaitGroup would result in much better
    readability.

    I also (think I) fully understand what rog is saying here:
    If you're reading a variable in one goroutine that's being set by another
    and you don't have any synchronisation between them, then your
    program behaviour is undefined.
    But just for the sake of this thought experiment: IF said variable is a
    bool, then when reading it, the program can only ever read true or false.
    Consider exactly this minimal step-by-step scenario:

    - main() sets the bool 'done' to false prior to launching any goroutine;
    main() is the only one ever reading 'done' and the only one ever setting
    it to false
    - then it launches the goroutine, which does its work in parallel, while
    main() proceeds to do some of its own work, then waits (busy-loops) until
    reading 'done' yields true
    - this can only possibly happen after the goroutine got the chance to set
    it to true just before returning, which is the only access to 'done' the
    goroutine performs

    Now, to be sure, I will most likely switch to a WaitGroup or a done channel
    anyway because of the very good arguments presented previously. But just to
    satisfy a budding Go geek's technical curiosity... how can the above
    minimal setup ever become 'undefined'? As it stands, it seems like a highly
    undesirable and crude "manual sync"... but it does, in a way, maintain a
    "synchronized busy-or-done state" and an effective wait, doesn't it?

  • Philipp Schumann at Feb 4, 2013 at 7:35 pm
    OK, I've now checked out golang.org/ref/mem as suggested by rog, and
    indeed I found this paragraph:

    Another incorrect idiom is busy waiting for a value, as in:

        var a string
        var done bool

        func setup() {
            a = "hello, world"
            done = true
        }

        func main() {
            go setup()
            for !done {
            }
            print(a)
        }

    As before, there is no guarantee that, in main, observing the write to
    done implies observing the write to a, so this program could print an
    empty string too. Worse, there is no guarantee that the write to done
    will ever be observed by main, since there are no synchronization events
    between the two threads. The loop in main is not guaranteed to finish.


    Well... that confirms I *really* cannot do the bool 'done' flag thing even
    if I wanted to. Thanks for pointing me there!

  • David Anderson at Feb 4, 2013 at 7:43 pm
    And for a general treatment of the topic, I suggest reading
    http://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong.

    The short version is: there is no such thing as a benign data race. Unless
    you use instructions which make ordering promises (which the sync package
    uses in various places, along with kernel-provided primitives), both the
    compiler and the CPU will make optimizations that assume no concurrent
    access. If you *do* access that data concurrently, the memory location you
    are accessing could contain *anything*, up to and including the value of a
    different variable, an intermediate result of a different computation, or a
    random value. You have no way of knowing.

    - Dave

  • Kevin Gillette at Feb 4, 2013 at 7:57 pm
    And even if concurrent access of that form were not a problem, a simple busy wait, in the current implementation, would prevent other goroutines from being scheduled onto that thread (meaning deadlock when GOMAXPROCS=1), in addition to guzzling cycles. Some of the locking in Go is optimized to do a small amount of optimistic busy waiting before blocking (though I believe this is in the scheduler), so those efficiency ideas have already been given some attention.

  • Philipp Schumann at Feb 4, 2013 at 8:19 pm
    So, all this leads me to wonder: out of the various possible
    synchronization approaches -- channels, Mutex, RWMutex, WaitGroup -- do
    they *all* guarantee that immediately upon synchronization, *all* writes
    (performed on various variables/locations) by the "job" goroutine will be
    observed by the "main" goroutine -- and if not, which of them do guarantee
    this? If yes, which of them has the least amount of overhead /
    under-the-hood work going on?

    WaitGroup would be fine except "main" does not really wait for both to
    finish -- it waits for JobA to finish, reads what JobA has written, only
    then waits for JobB to finish, then reads what JobB has written.

    I know, I know... channels would solve this easily and I'd be done with it
    in no time. But this thing *is* a library and if a lower-level primitive
    would do the same job, I'd really like to know.
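    The staged waiting described above can be sketched with two done
    channels (the job names and result strings here are hypothetical): a
    channel receive happens-after the corresponding close, so each result
    is guaranteed visible when read, and JobA's result can be read before
    JobB has even finished.

```go
package main

import "fmt"

// stagedWait launches two jobs and waits for them in stages: first JobA,
// reading its result as soon as it is done, then JobB. Each receive from
// a done channel happens-after the corresponding close, so each result
// is safely visible at the point it is read.
func stagedWait() (string, string) {
	var resultA, resultB string
	doneA := make(chan struct{})
	doneB := make(chan struct{})

	go func() { resultA = "A's output"; close(doneA) }()
	go func() { resultB = "B's output"; close(doneB) }()

	<-doneA // JobA finished: resultA is visible now
	a := resultA
	<-doneB // JobB finished: resultB is visible now
	return a, resultB
}

func main() {
	a, b := stagedWait()
	fmt.Println(a)
	fmt.Println(b)
}
```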

  • Ian Lance Taylor at Feb 4, 2013 at 8:33 pm

    On Mon, Feb 4, 2013 at 12:12 PM, Philipp Schumann wrote:
    Yes, they all guarantee it.

    I think Mutex has the least overhead, but, of course, it is also the
    one that is easiest to misuse.

    Ian

  • John Nagle at Feb 4, 2013 at 10:14 pm

    On 2/4/2013 12:12 PM, Philipp Schumann wrote:
    Now we start to get into issues such as "when do Go compilers
    generate memory fences?" In the C/C++ world, there's endless
    trouble around this, and the "volatile" qualifier to give some
    control over it.

    In the x86 world, the CPUs, for historical reasons, provide
    reasonably strong guarantees of sequential memory access
    across CPUs. See

    http://bartoszmilewski.com/2008/11/05/who-ordered-memory-fences-on-an-x86/

    This is less true for ARM CPUs. (If somebody puts Go
    on Itanium or SPARC CPUs, it gets much more complex. See
    http://h21007.www2.hp.com/portal/download/files/unprot/ddk/mem_ordering_pa_ia.pdf
    for what life is like on
    a "relaxed memory" multiprocessor.) This is going to become
    a bigger problem as more relaxed-memory-model multiprocessors
    come into use, like the ARM Cortex-A50, shipping this year,
    which has load-acquire and store-release instructions to
    support "volatile" variables.

    You can presumably rely on any blocking event generating
    a memory fence. One would expect that mutex unlocks would generate
    a memory fence. (Is this documented somewhere?) It came up last June
    that a close event on a channel was not a synchronizing operation,
    although sending and receiving were. So using a channel close
    as a Done flag was unsafe in older versions of Go.
    See "http://comments.gmane.org/gmane.comp.lang.go.general/62107"

    There are some optimizations, like keeping a variable
    in a register during an inner loop, which totally break
    attempts at unlocked synchronization. Spinning on a
    Boolean is a classic example. Go, like C, is fast enough and
    optimized enough that you can hit these problems.

    This is why it's better to send data across channels than
    to share it between goroutines. Then you don't have to think
    about this stuff.

    John Nagle

  • Minux at Feb 4, 2013 at 10:24 pm

    On Tue, Feb 5, 2013 at 6:14 AM, John Nagle wrote:

    You can presumably rely on any blocking event generating
    a memory fence. One would expect that mutex unlocks would generate
    a memory fence. (Is this documented somewhere?) It came up last June
    that a close event on a channel was not a synchronizing operation,
    although sending and receiving were. So using a channel close
    as a Done flag was unsafe in older versions of Go.
    See "http://comments.gmane.org/gmane.comp.lang.go.general/62107"
    Did you read that whole thread?
    close() is equivalent to sending data across the channel in this respect,
    so it's always safe to use it as the Done flag (at least I'm reasonably
    sure that's the case for Go 1.0).
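    A sketch of close() used as a Done flag (hypothetical names): closing
    the channel releases every receiver at once, and the write performed
    before the close happens-before each receive, so all readers see it.

```go
package main

import (
	"fmt"
	"sync"
)

// closeBroadcast has one goroutine write a result and close a channel;
// every waiter unblocks (a receive on a closed channel returns
// immediately), and the write happens-before each receive, so all
// readers observe it.
func closeBroadcast() []string {
	var result string
	done := make(chan struct{})

	go func() {
		result = "ready"
		close(done) // broadcast: wakes every waiter
	}()

	var wg sync.WaitGroup
	seen := make([]string, 3)
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			<-done // happens-after the close: result is visible
			seen[i] = result
		}(i)
	}
	wg.Wait()
	return seen
}

func main() {
	fmt.Println(closeBroadcast())
}
```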

  • Ian Lance Taylor at Feb 4, 2013 at 10:25 pm

    On Mon, Feb 4, 2013 at 2:14 PM, John Nagle wrote:
    Now we start to get into issues such as "when do Go compilers
    generate memory fences?" In the C/C++ world, there's endless
    trouble around this, and the "volatile" qualifier to give some
    control over it.
    Go compilers never generate memory fences. Correct programs must use
    the synchronization mechanisms, all of which are based in library
    routines, either Go packages or the runtime support library. Those
    libraries contain the appropriate fence instructions.

    I have to comment that the volatile qualifier in C/C++ does not
    generate any memory fences either. If you have C/C++ code that has a
    race condition, it will still have a race condition if you add the
    volatile qualifier. Volatile is for memory mapped hardware; it is not
    for thread synchronization. (Note that volatile has a different
    meaning in Java, where it can indeed be used for thread
    synchronization.)
    (If somebody puts Go
    on Itanium or SPARC CPUs, it gets much more complex.
    Note that Go already works on SPARC, using gccgo.
    You can presumably rely on any blocking event generating
    a memory fence. One would expect that mutex unlocks would generate
    a memory fence. (Is this documented somewhere?)
    http://golang.org/ref/mem

    It does not use the word memory fence, but it provides guarantees that
    can only be implemented using a memory fence on modern processors.

    Ian

  • John Nagle at Feb 4, 2013 at 11:31 pm

    On 2/4/2013 2:25 PM, Ian Lance Taylor wrote:
    On Mon, Feb 4, 2013 at 2:14 PM, John Nagle wrote:
    Now we start to get into issues such as "when do Go compilers
    generate memory fences?" In the C/C++ world, there's endless
    trouble around this, and the "volatile" qualifier to give some
    control over it.
    Go compilers never generate memory fences. Correct programs must use
    the synchronization mechanisms, all of which are based in library
    routines, either Go packages or the runtime support library. Those
    libraries contain the appropriate fence instructions.
    OK. For ARM, a Data Memory Barrier when a mutex unlocks or
    blocks, or a channel is read, is more than sufficient. ARM
    has a simple fence model. So far, few of the machines with
    memory sync models from hell have achieved much market share.
    For which we can all be grateful.
    I have to comment that the volatile qualifier in C/C++ does not
    generate any memory fences either.
    In general, you're right. For some reason, Intel Itanium
    compilers did generate fences for volatile variables. But
    that seems to have been unique to that one compiler.

    http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming
    You can presumably rely on any blocking event generating
    a memory fence. One would expect that mutex unlocks would generate
    a memory fence. (Is this documented somewhere?)
    http://golang.org/ref/mem

    It does not use the word memory fence, but it provides guarantees that
    can only be implemented using a memory fence on modern processors.
    Ah. That's a big help. That's a reasonably user-friendly set of
    rules. It covers the requirement that channel receives are
    fence events for shared data not sent over the channel, which is
    a Go idiom.

    John Nagle


  • Dmitry Vyukov at Feb 5, 2013 at 5:37 am

    On Tuesday, February 5, 2013 2:14:26 AM UTC+4, John Nagle wrote:
    In the x86 world, the CPUs, for historical reasons, provide
    reasonably strong guarantees of sequential memory access
    across CPUs.


    x86 does not guarantee sequential order across CPUs. Try the
    following test without the memory fence between lines 1021 and 1022, and it
    will fail within the first few iterations:
    https://code.google.com/p/go/source/browse/src/pkg/sync/atomic/atomic_test.go#1005



    See

    http://bartoszmilewski.com/2008/11/05/who-ordered-memory-fences-on-an-x86/

    This is less true for ARM CPUs. (If somebody puts Go
    on Itanium or SPARC CPUs, it gets much more complex. See

    http://h21007.www2.hp.com/portal/download/files/unprot/ddk/mem_ordering_pa_ia.pdf
    for what life is like on
    a "relaxed memory" multiprocessor.) This is going to become
    a bigger problem as more relaxed-memory model multiprocessors
    come into use. Like the ARM Cortex-A50, shipping this year,
    which has load-acquire and store-release instructions to
    support "volatile" variables.

    You can presumably rely on any blocking event generating
    a memory fence. One would expect that mutex unlocks would generate
    a memory fence. (Is this documented somewhere?) It came up last June
    that a close event on a channel was not a synchronizing operation,
    although sending and receiving were. So using a channel close
    as a Done flag was unsafe in older versions of Go.
    See "http://comments.gmane.org/gmane.comp.lang.go.general/62107"

    There are some optimizations, like keeping a variable
    in a register during an inner loop, which totally break
    attempts at unlocked synchronization. Spinning on a
    Boolean is a classic example. Go, like C, is fast enough and
    optimized enough that you can hit these problems.

    This is why it's better to send data across channels than
    to share it between goroutines. Then you don't have to think
    about this stuff.
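
    A minimal sketch of the safe alternative: instead of spinning on a
    boolean (which the compiler may keep in a register, so the loop never
    observes the other goroutine's write), block on a channel receive,
    which both waits and synchronizes.

    ```go
    package main

    import "fmt"

    func main() {
    	done := make(chan bool)
    	go func() {
    		// ... the job's work would go here ...
    		done <- true // synchronizing send replaces "jobDone = true"
    	}()
    	finished := <-done // blocks until the job signals; no busy-wait, no race
    	fmt.Println(finished)
    }
    ```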

    John Nagle
  • John Nagle at Feb 5, 2013 at 7:20 am

    On 2/4/2013 9:36 PM, Dmitry Vyukov wrote:

    x86 does not guarantee sequential order across CPUs. Try the
    following test without the memory fence between lines 1021 and 1022, and it
    will fail within the first few iterations:
    https://code.google.com/p/go/source/browse/src/pkg/sync/atomic/atomic_test.go#1005
    Cute. That demonstrates the AMD64/newer X86 rule:

    "Loads may be reordered with older stores to different locations".

    It takes loads and stores from two different locations to force that
    error. For a single memory location, loads and stores are supposed to
    appear to be sequential on X86, even across processors.
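
    A sketch of the Dekker-style pattern that test exercises. With plain
    stores and loads, the store-load reordering described above allows both
    goroutines to read 0; written with sync/atomic (which, on current
    implementations, inserts the needed fences), that outcome cannot occur:

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    	"sync/atomic"
    )

    func main() {
    	var x, y, r1, r2 int32
    	var wg sync.WaitGroup
    	wg.Add(2)
    	go func() {
    		defer wg.Done()
    		atomic.StoreInt32(&x, 1)  // store to one location...
    		r1 = atomic.LoadInt32(&y) // ...then load the other
    	}()
    	go func() {
    		defer wg.Done()
    		atomic.StoreInt32(&y, 1)
    		r2 = atomic.LoadInt32(&x)
    	}()
    	wg.Wait()
    	// With plain accesses, r1 == 0 && r2 == 0 is possible on x86.
    	// With atomic operations, at least one goroutine must observe
    	// the other's store:
    	fmt.Println(r1 == 1 || r2 == 1)
    }
    ```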

    It's nice that you can force that error from Go. Just for fun, here it
    is on the Go playground.

    http://play.golang.org/p/dlHlc2jEv0

    There it doesn't fail, because the Playground is locked down to single
    thread mode. You can set GOMAXPROCS there, but it doesn't seem to do
    anything. (The Playground really should support at least two threads,
    so you can test and demo race conditions.)

    John Nagle

  • Dmitry Vyukov at Feb 5, 2013 at 7:22 am

    On Tue, Feb 5, 2013 at 11:19 AM, John Nagle wrote:
    On 2/4/2013 9:36 PM, Dmitry Vyukov wrote:

    x86 does not guarantee sequential order across CPUs. Try the
    following test without the memory fence between lines 1021 and 1022, and it
    will fail within the first few iterations:
    https://code.google.com/p/go/source/browse/src/pkg/sync/atomic/atomic_test.go#1005
    Cute. That demonstrates the AMD64/newer X86 rule:

    "Loads may be reordered with older stores to different locations".

    This rule has always been there, and the processors have always worked that way.

    It takes loads and stores from two different locations to force that
    error. For a single memory location, loads and stores are supposed to
    appear to be sequential on X86, even across processors.

    It's nice that you can force that error from Go. Just for fun, here it
    is on the Go playground.

    http://play.golang.org/p/dlHlc2jEv0

    There it doesn't fail, because the Playground is locked down to single
    thread mode. You can set GOMAXPROCS there, but it doesn't seem to do
    anything. (The Playground really should support at least two threads,
    so you can test and demo race conditions.)
    Yeah, and let people hack the Go playground server.

  • Dan Kortschak at Feb 5, 2013 at 8:44 am
    See this article for an explanation of why that would be a bad idea:
    http://research.swtch.com/gorace

    ... and here for an example of how an exploit could be constructed (if I knew how to write assembler it would be more convincing):
    https://github.com/zond/gosafe/issues/1
    On 05/02/2013, at 5:50 PM, "John Nagle" wrote:

    The Playground really should support at least two threads,
    so you can test and demo race conditions
  • Anthony Martin at Feb 5, 2013 at 5:17 pm

    Dan Kortschak once said:
    ... and here for an example of how an exploit could be
    constructed (if I knew how to write assembler it would
    be more convincing):
    https://github.com/zond/gosafe/issues/1
    Here's some assembly for you:

    // exit(42) for linux-amd64
    var exitcode = []byte{
    // MOV $42, DI
    0xbf, 0x2a, 0x00, 0x00, 0x00,
    // MOV $231, AX
    0xb8, 0xe7, 0x00, 0x00, 0x00,
    // SYSCALL
    0x0f, 0x05,
    // RET
    0xc3,
    }

    Make sure you also do:

    // copy the exit code into the heap
    c := append([]byte(nil), exitcode...)

    in the main function before passing it
    to sliceToFunc since static literals are
    placed in the .rodata section.

    Cheers,
    Anthony

  • John Nagle at Feb 5, 2013 at 5:31 pm

    On 2/5/2013 12:44 AM, Dan Kortschak wrote:
    See this article for an explanation of why that would be a bad idea:
    http://research.swtch.com/gorace

    ... and here for an example of how an exploit could be constructed
    (if I knew how to write assembler it would be more convincing):
    https://github.com/zond/gosafe/issues/1

    On 05/02/2013, at 5:50 PM, "John Nagle"
    wrote:
    The Playground really should support at least two threads, so you
    can test and demo race conditions
    I knew an exploit was possible, but it's interesting to see how
    short one is.

    The Playground and AppEngine would probably have to run each
    process in a jail under SELinux to allow multiple threads, but
    that's worth having. It's embarrassing that OS security is
    still so bad that few servers can safely run a hostile process.
    Cranking up a Google Compute Engine (Google's product that
    competes with AWS) would work, but launching a whole Linux
    instance for one Playground run is a bit much. Would running
    the Playground under the race detector work?

    People are trying to use Go for highly parallel computation.
    The language is good for that. Maybe that wasn't intended,
    but Java wasn't intended to replace COBOL, which it did. The use
    case for Go is when you have crunching to do and many CPUs to do it.
    If all you need is I/O parallelism, Javascript and its callback
    model can get the job done. Without worrying about race conditions.

    If Go has to be locked down to single-thread to be safe, it's
    not really necessary.

    John Nagle



  • Patrick Mylund Nielsen at Feb 5, 2013 at 5:38 pm
    The use case for Go is when you have crunching to do and many CPUs to do
    it. If all you need is I/O parallelism, Javascript and its callback model
    can get the job done. Without worrying about race conditions.

    lol

  • Patrick Mylund Nielsen at Feb 5, 2013 at 5:44 pm
    Ah, I missed the "if" there. I thought you were suggesting using Javascript
    for multi-core programming.

    Indeed, but callback spaghetti is bad for many different reasons. Python
    with its GIL is probably a better example.

    On Tue, Feb 5, 2013 at 6:38 PM, Patrick Mylund Nielsen wrote:

    The use case for Go is when you have crunching to do and many CPUs to
    do it. If all you need is I/O parallelism, Javascript and its callback model
    can get the job done. Without worrying about race conditions.

    lol

  • John Nagle at Feb 5, 2013 at 6:38 pm

    On 2/5/2013 9:44 AM, Patrick Mylund Nielsen wrote:
    Ah, I missed the "if" there. I thought you were suggesting using Javascript
    for multi-core programming.

    Indeed, but callback spaghetti is bad for many different reasons. Python
    with its GIL is probably a better example.
    I've been to a talk at Stanford by one of the Javascript architects
    (Steve Yegge, I think) where he argues for callbacks vs. threads, and
    against blocking. He argues that all I/O should be callback-based.

    Mid-level programmers can write complex, parallel programs in
    Javascript and get them to work. Browsers routinely execute
    Javascript that has been cut and pasted from various sources
    (typically ad services and tracking services) which operate
    in the background without too much trouble. On top of that,
    blocking services block some of those ad services and tracking
    services without jamming up the rest of the program.

    Try doing that with a thread, lock, shared data, and channel
    model. Javascript appears as an ugly mess, and it tends to be
    written that way. But there's a certain elegance in the model
    behind it.

    The current options for concurrency on mainstream
    machines seem to be:
    - No concurrency (BASIC)
    - I/O concurrency only (Javascript)
    - Safe message-passing concurrency (Erlang)
    - Safe multiprogramming (Python, Go in single-thread mode)
    - Safe concurrency via monitors (Ada)
    - Unsafe concurrency via user-controlled locking
    (C/C++, Java, Go in multi-thread mode)

    Go makes unsafe concurrency easily accessible to more programmers.
    In multi-thread mode, Go programmers suddenly have to be aware
    of issues like out of order execution, effects of compiler
    reordering, memory fences, and related low-level issues.
    They must at least know what they're supposed to avoid doing.
    Otherwise they get into problems like the one that started this
    thread - coding a spin lock improperly.

    I think they're going to need more help.

    John Nagle



  • Minux at Feb 5, 2013 at 6:44 pm

    On Wed, Feb 6, 2013 at 2:30 AM, John Nagle wrote:

    The current options for concurrency on mainstream
    machines seem to be:
    - No concurrency (BASIC)
    - I/O concurrency only (Javascript)
    - Safe message-passing concurrency (Erlang)
    even with "safe" message-passing concurrency, the program is still
    vulnerable to concurrency-related problems like livelock and deadlock.
    There is no panacea for concurrent programming.
    - Safe multiprogramming (Python, Go in single-thread mode)
    - Safe concurrency via monitors (Ada)
    - Unsafe concurrency via user-controlled locking
    (C/C++, Java, Go in multi-thread mode)

    Go makes unsafe concurrency easily accessible to more programmers.
    In multi-thread mode, Go programmers suddenly have to be aware
    of issues like out of order execution, effects of compiler
    reordering, memory fences, and related low-level issues.
    No. For example, the whole Go Memory Model doesn't mention any of those
    concepts, and if the user follows it closely, he can avoid all those
    pitfalls without ever knowing the low-level details (they are really
    irrelevant, and that's why Go discourages people from using sync/atomic
    and encourages sticking to the high-level synchronization primitives
    provided by goroutines/channels and the sync package).

  • Patrick Mylund Nielsen at Feb 5, 2013 at 7:13 pm
    But callbacks do not somehow make concurrent access to shared variables
    safe, unless you are passing around the state of the world, in which case
    you probably want a functional language, not callback spaghetti in
    JavaScript.

  • Nigel Tao at Feb 5, 2013 at 10:46 pm

    On Wed, Feb 6, 2013 at 5:30 AM, John Nagle wrote:
    Try doing that with a thread, lock, shared data, and channel
    model. Javascript appears as an ugly mess, and it tends to be
    written that way. But there's a certain elegance in the model
    behind it.
    FWIW, there was another post in the golang-nuts list just yesterday
    where a self-described "former node core contributor" much preferred
    Go's goroutines over Javascript's callbacks.

    https://groups.google.com/forum/#!msg/golang-nuts/zeLMYnjO_JA/hdGxVGUwF90J

  • Minux at Feb 5, 2013 at 6:12 pm

    On Wed, Feb 6, 2013 at 1:31 AM, John Nagle wrote:

    If Go has to be locked down to single-thread to be safe, it's
    not really necessary.
    Russ' article has already identified ways to remediate this, and it's
    quite possible that a future Go implementation might be immune to
    that problem.

    Let me stress it once more, the problem is not inherent in Go the
    language, it's just a problem of the current implementation.

  • John Nagle at Feb 5, 2013 at 8:41 pm

    On 2/5/2013 10:11 AM, minux wrote:
    Russ' article has already identified ways to remediate this, and it's
    quite possible that a future Go implementation might be immune to
    that problem.
    That just covers the race condition problem with interfaces.
    There's also one with maps. There may be more. (Slices, maybe?)
    Let me stress it once more, the problem is not inherent in Go the
    language, it's just a problem of the current implementation.
    Just a "small matter of programming"...

    This is a hard problem. If the language locks everything,
    there's a performance hit. That's why maps aren't locked and
    shareable. If this were easy to fix, it would have been
    fixed already. Every concurrent system with shared memory
    hits this problem. Go might be within reach of solving it.

    (I just received an invitation for a talk: "Combating Cyber
    Attacks Using a 288-core server", by someone using a
    Tilera machine. Huge numbers of cores are now being used
    in hostile environments. Go could help in that arena.)

    John Nagle


  • Nigel Tao at Feb 5, 2013 at 10:50 pm

    On Wed, Feb 6, 2013 at 7:41 AM, John Nagle wrote:
    On 2/5/2013 10:11 AM, minux wrote:
    Russ' article has already identified ways to remediate this, and it's
    quite possible that a future Go implementation might be immune to
    that problem.
    That just covers the race condition problem with interfaces.
    There's also one with maps. There may be more. (Slices, maybe?)
    http://research.swtch.com/gorace explicitly discusses slices and
    strings, not just interfaces. It also describes the general principle
    that would also apply to maps: "The fix is to make the updates atomic.
    In Go, the easiest way to do that is to make the representation a
    single pointer that points at an immutable structure. When the value
    needs to be updated, you allocate a new structure, fill it in
    completely, and only then change the pointer to point at it."
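
    That copy-on-write idiom can be sketched with sync/atomic's Value type
    (which was added to Go well after this thread; the names below are
    illustrative): readers always see either the old complete map or the
    new complete map, never a half-built one.

    ```go
    package main

    import (
    	"fmt"
    	"sync/atomic"
    )

    // config holds an immutable map[string]string. Readers Load it;
    // writers publish a fully built replacement in one atomic step.
    var config atomic.Value

    func update(key, val string) {
    	old, _ := config.Load().(map[string]string)
    	fresh := make(map[string]string, len(old)+1)
    	for k, v := range old {
    		fresh[k] = v // copy the existing entries into the new map
    	}
    	fresh[key] = val
    	config.Store(fresh) // swap the pointer only after fresh is complete
    }

    func main() {
    	config.Store(map[string]string{})
    	update("mode", "fast")
    	fmt.Println(config.Load().(map[string]string)["mode"])
    }
    ```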

  • Steve wang at Feb 4, 2013 at 7:38 pm

    On Tuesday, February 5, 2013 3:29:58 AM UTC+8, Philipp Schumann wrote:
    Thanks everyone. I agree using WaitGroup would result in much better
    readability.

    I also (think I) fully understand what rog is saying here:
    If you're reading a variable in one goroutine that's being set by another
    and you don't have any synchronisation between them, then your
    program behaviour is undefined.
    BUT just for the sake of this thought-experiment. IF said variable is a
    bool, when reading it, the program can only read true or false. Consider
    exactly this minimal step-by-step scenario:

    - main() sets the bool 'done' to false prior to any go-routine-call.
    main() is the only one ever reading 'done' and the only one ever setting it
    to false
    - then it calls the go-routine, which does stuff in parallel, then
    proceeds to do some of its own stuff, then waits (loops-blocks) until
    reading from 'done' yields true
    - this can only possibly happen after the go-routine did get the chance to
    set it to true prior to returning, which is the only 'done' access the
    go-routine performs

    Now, to be sure, I will most likely switch to a WaitGroup or a done
    channel anyway because of the very good arguments presented previously. But
    just to satisfy a budding Go Geek's technical curiosity... how can the
    above minimal setup ever become 'undefined'?
    As far as I'm concerned, the change to 'done' made in one goroutine may
    never be observed by the other goroutine.

    As it stands, it seems like a highly undesirable and crude "manual
    sync"... but it does keep in a way a "synchronized busy-or-done state" and
    an effective wait, doesn't it?

    On Monday, February 4, 2013 9:07:51 PM UTC+7, rog wrote:

    If you're reading a variable in one goroutine that's being set by another
    and you don't have any synchronisation between them, then your
    program behaviour is undefined.

    See http://golang.org/ref/mem

    Also, as Steve Wang says, we can't really tell anything without seeing
    the code in question.
  • Nate Finch at Feb 4, 2013 at 3:39 pm
    Use sync.WaitGroup. It'll make your code easier for other people to
    understand (since it's the default way to do it in go), and there's really
    no reason not to. It's not slow, and it's probably a lot safer than a
    hand-rolled solution.

    If I saw a hand-rolled solution, I'd have to read all the code very
    carefully to make sure it was doing the right thing, and that's a lot of
    time and effort, when you could just use sync.WaitGroup and I'd know it
    was doing the right thing (and you'd know it was doing the right thing).
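    The WaitGroup version of the two-job loop from the question might look like the sketch below (the jobFoo/jobBar names come from the thread; the job bodies are placeholders). Note that wg.Done() is a function call, so it composes with defer, which also addresses the aside about deferring non-func-call statements.

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	for i := 0; i < 3; i++ {
    		var wg sync.WaitGroup
    		wg.Add(2) // two jobs per iteration

    		go func() {
    			defer wg.Done() // Done is a func call, so defer works
    			// ... jobFoo's work (elided) ...
    		}()
    		go func() {
    			defer wg.Done()
    			// ... jobBar's work (elided) ...
    		}()

    		// main goroutine does its own work here, then waits:
    		wg.Wait()
    		fmt.Println("iteration", i, "done")
    	}
    }
    ```

    wg.Wait() blocks without spinning, and the WaitGroup itself provides the synchronisation that the plain booleans lack.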

    On Monday, February 4, 2013 8:35:09 AM UTC-5, Philipp Schumann wrote:

    So this (part of the) app is really simple and this particular goroutine
    usage is also not going to grow more complex really:

    There's a "main goroutine" that loops, in each loop iteration fires off
    two separate goroutines, does some stuff on its own, then waits for the 2
    routines to finish, then next loop iteration.

    The waiting is done with two booleans (say, jobFooDone and jobBarDone) in
    the scope of the main goroutine. This may seem archaic but in this really
    quite simple scenario I don't see how it could go wrong -- can you?

    Each boolean is only ever set to false by the main routine (beginning of
    the loop iteration prior to launching the goroutine), and only ever set to
    true by the job goroutine (just before it returns -- btw would be great if
    we could defer non-func-call statements). Hardly a real pressing "clean &
    proper" syncing need here, or is there? (Not talking about communicating
    other data between the goroutines right here, just the waiting part that
    WaitGroup would also address.)

    I can see in waitgroup.go all sorts of atomic operations being done and I
    can see how that will be very very handy when the number of goroutines
    being launched is dynamic/unknown/greater than 2 or 3. But for my simple
    case, would there be any benefit or indeed necessity to switch to WaitGroup
    from my two simple boolean vars?

Discussion Overview
group: golang-nuts
categories: go
posted: Feb 4, '13 at 1:35p
active: Feb 5, '13 at 10:50p
posts: 31
users: 14
website: golang.org
