FAQ
I can roll my own... but does anyone else think a limited workgroup would
be useful? i.e. block on Add if the workgroup has reached a size limit.
This can be useful when, for instance, each thread in a pool generates a lot
of data in memory and you know you are going to start running up against
hardware limits.

Curious to know how others have handled similar scenarios...

--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


  • Dave Cheney at Mar 6, 2013 at 3:53 pm
    Buffered channels can be used effectively for this.
  • Matt Kane's Brain at Mar 6, 2013 at 3:59 pm
    This is something buffered channels are good for.

    sem := make(chan struct{}, N)
    for {
      go func() {
        sem <- struct{}{}
        // do something
        <-sem
      }()
    }

  • Kyle Lemons at Mar 6, 2013 at 7:06 pm
    It was pointed out to me that this is actually not correct w.r.t the memory
    model; you want

    sem := make(chan struct{}, N)
    for i := 0; i < N; i++ {
       sem <- struct{}{}
    }
    for {
       go func() {
         <-sem
         ...
         sem <- struct{}{}
       }()
    }

    the reasons are very technical and it will probably work in practice, but
    yeah.
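    For anyone who wants to try it, here is a runnable sketch of this pre-filled
    version (the function name runLimited and the peak-tracking are additions
    made just to make the limit observable, not code from the thread):

```go
package main

import (
	"fmt"
	"sync"
)

// runLimited starts `jobs` goroutines but allows at most `limit` of
// them to run at once, using a token channel that is filled up front:
// receive to acquire, send to release. It returns the peak number of
// goroutines observed running concurrently.
func runLimited(limit, jobs int) int {
	sem := make(chan struct{}, limit)
	for i := 0; i < limit; i++ {
		sem <- struct{}{} // pre-fill with tokens
	}

	var (
		mu            sync.Mutex
		running, peak int
		wg            sync.WaitGroup
	)
	for i := 0; i < jobs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-sem // acquire: blocks once `limit` tokens are out
			mu.Lock()
			running++
			if running > peak {
				peak = running
			}
			mu.Unlock()
			// ... the real (memory-hungry) work would go here ...
			mu.Lock()
			running--
			mu.Unlock()
			sem <- struct{}{} // release the token
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	fmt.Println("peak concurrent workers:", runLimited(3, 20))
}
```

    Note that unlike the original loop, this bounds the number of goroutines
    doing work; it still spawns one goroutine per job, which is a separate cost.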

  • Minux at Mar 6, 2013 at 7:17 pm

    Right. I remember this being discussed once on this mailing list, but I
    couldn't find the relevant post. Has anyone bookmarked it? Thanks.

  • Kyle Lemons at Mar 6, 2013 at 7:23 pm
    Yeah, I searched both my gmail and groups and can't seem to find it...

  • Gordon Klaus at Mar 6, 2013 at 8:07 pm
    That's a curious little hole in the memory model. It says
    A receive from an unbuffered channel happens before the send on that
    channel completes.
    but I'd like it to say
    A receive from an unbuffered (or buffered, but full) channel happens before
    the send on that channel completes.

    Now I'm curious to read the discussion you mentioned.

  • Bryanturley at Mar 6, 2013 at 8:38 pm

    Well, if it receives on a full buffered channel it will be from the front,
    and your sent data will be at the end (FIFO).
    I wouldn't consider that the same as an unbuffered channel, since the
    received data is not the same as the sent data.
    Perhaps in the case of make(chan T, 1)?

  • Kyle Lemons at Mar 6, 2013 at 10:11 pm
    I found the discussion, it was on golang-dev *sigh*

    https://groups.google.com/d/msg/golang-dev/ShqsqvCzkWg/Kg30VPN4QmUJ

    If you want your brain to melt a little, read the whole thread from top to
    bottom. :)

  • Steven Blenkinsop at Mar 7, 2013 at 12:15 am

    That conversation didn't so much melt my brain as break my heart :(.

    The required rule is basically that, given a channel of buffer capacity
    N >= 0, the (M-N)th receive happens before the Mth send completes, for all
    M > N. This rule is already true for N == 0, but not otherwise.

    The thing is, this ordering is required in order for "send on a buffered
    channel will block until there is room in the buffer" to have any meaning.
    If the Mth send can happen before the (M-N)th receive, then more values
    are buffered than the channel's capacity allows. Which means you have to
    assume that buffer capacity is meaningless in order to be able to do the
    reordering Dmitry does in that thread :\. It's not as surprising that
    buffer capacity ends up being irrelevant later in his analysis when you
    realize that he implicitly made this assumption up front.

  • Kyle Lemons at Mar 7, 2013 at 12:25 am

    On Wed, Mar 6, 2013 at 4:15 PM, Steven Blenkinsop wrote:
    That conversation didn't so much melt my brain as break my heart :(.

    The required rule is basically that, given a channel of buffer capacity N
    = 0, the (M-N)th receive happens before the Mth send completes for all M >
    N. This rule is already true for N == 0, but not otherwise.
    I think that's covered under the slightly-easier-to-understand rule:
      Each send on a particular channel is matched to a corresponding receive
    from that channel, usually in a different goroutine.

    ;-)

    The thing is, this ordering is required in order for "send on a buffered
    channel will block until there is room in the buffer" to have any meaning.
    I don't know, I actually think the blocking itself can be expressed in
    terms of the above rule: "At most N (where N is the buffer size) sends on
    a channel can happen before the corresponding receive completes." or something

  • Kyle Lemons at Mar 7, 2013 at 2:35 am

    On Wed, Mar 6, 2013 at 4:46 PM, Steven Blenkinsop wrote:
    On Wednesday, 6 March 2013, Kyle Lemons wrote:


    I think that's covered under the slightly-easier-to-understand rule:
    Each send on a particular channel is matched to a corresponding receive
    from that channel, usually in a different goroutine.

    ;-)
    No, because the Mth send is matched to the Mth receive, and the Mth send
    happens before the Mth receive completes, and in the case of unbuffered
    channels, the Mth receive happens before the Mth send completes (i.e.
    N == 0). The N == 0 case is already covered, yes, which was my point,
    since the N == 0 rule can be considered a special case of the N >= 0 rule
    I stated above, which doesn't exist in the memory model.
    On Wed, Mar 6, 2013 at 5:15 PM, Steven Blenkinsop wrote:

    Was going off list intentional?
    Apparently not. It wasn't me! ;-)

    On Wednesday, 6 March 2013, Kyle Lemons wrote:


    Oh, I think what you're stating is not actually the case.
    Yes. I'm stating a rule that doesn't exist in the memory model. I'm
    stating it because it's the rule that would be required in order to be able
    to guarantee that Matt's semaphore will work.

    The only happens before relationship applies to the value. If a value
    goes into a channel and then comes out of a channel, the operation that put
    it in happens before the operation that takes it out.
    The other happens-before relationship is:
    A receive from an unbuffered channel happens before the [corresponding]
    send on that channel completes.
    This rule exists so that unbuffered channels are actually unbuffered. If
    a send completed before its corresponding receive happened, then the value
    sent on it would be buffered, so the memory model disallows that.

    My point was that the same logic applies to buffered channels as well. If
    the Mth send completes before the (M-N)th receive happens, where N is the
    buffer capacity, then there are N+1 buffered values, which would exceed the
    buffer capacity. If the spec requires that sends to buffered channels on
    conforming implementations block until there is room in the buffer for the
    value, then the (M-N)th receive has to happen before the Mth send
    completes. This rule is not stated in the memory model, however.
    I feel like the correct behavior here is covered already by the memory
    model and the spec when considered together, but I'll defer to the experts.
      I don't think buffering a channel up to its max capacity makes it work as
    a rendezvous, which I think is what that particular rule is codifying, but
    I might be wrong (as I was a number of separate times in the thread linked
    above).

  • Bryanturley at Mar 7, 2013 at 12:26 am

    This is only relevant in the case of receiver and sender in the same
    goroutine though, correct?
    To overcome this you just need to pass some kind of token around,
    instead of just waiting for the operations to complete.
    Or am I missing something ;)

  • Kyle Lemons at Mar 7, 2013 at 12:30 am
    On Wed, Mar 6, 2013 at 4:25 PM, bryanturley wrote:
    This is only relevant in the case of receiver and sender in the same
    goroutine though, correct?
    Sends and receives in the same goroutine are subject to the following:
    Within a single goroutine, the happens-before order is the order expressed
    by the program.
    so I think that's already well-defined.

    To overcome this you just need to be passing some kind of token around,
    instead of just waiting for the operations to complete.
    Or I am missing something ;)
  • Tw at Mar 8, 2013 at 1:29 am
    After reading this thread, I'm not sure my understanding is right. Here
    is my understanding:

    In http://play.golang.org/p/rMpnZRtCwt, when the channel is full, an
    Unlock() can't be guaranteed to happen before the subsequent Lock(),
    because Unlock() receives from the buffered channel while Lock() sends
    on it.

    But in http://play.golang.org/p/WiGCDr_vME, things are different:
    Unlock() sends on the buffered channel and Lock() receives from it, so
    an Unlock() is guaranteed to happen before the corresponding Lock().

    So in http://play.golang.org/p/rMpnZRtCwt, the mutex can't limit the
    concurrency as expected, for the reasons in the golang-dev thread Kyle
    linked above.


  • Gordon Klaus at Mar 8, 2013 at 1:48 am

    On Thu, Mar 7, 2013 at 8:29 PM, Tw wrote:

    After reading this thread, I don't know what I understand are really
    right. Following is my understand:

    In http://play.golang.org/p/rMpnZRtCwt, when channel is full, the
    Unlock() can't be guaranteed to happen before the subsequent Lock(),
    because Unlock() is reading from buffer channel while Lock() is sending on
    buffer channel.
    It isn't guaranteed by the memory model (http://golang.org/ref/mem#tmp_6),
    but it is guaranteed by the spec (http://golang.org/ref/spec#Send_statements).

  • Dmitry Vyukov at Mar 8, 2013 at 9:01 am

    On Fri, Mar 8, 2013 at 5:29 AM, Tw wrote:
    After reading this thread, I don't know what I understand are really right.
    Following is my understand:

    In http://play.golang.org/p/rMpnZRtCwt, when channel is full, the Unlock()
    can't be guaranteed to happen before the subsequent Lock(), because Unlock()
    is reading from buffer channel while Lock() is sending on buffer channel.

    But in http://play.golang.org/p/WiGCDr_vME, the things are different.
    Unlock() is sending on buffer channel, Lock() is reading from buffer
    channel, and Unlock() is guaranteed to happen before the Lock().

    Yes, that's correct.
    And the race detector agrees as well.


  • Gordon Klaus at Mar 7, 2013 at 5:12 pm

    Yes, I see what you mean. Even in the case of make(chan T, 1), the
    receive gets the buffered value while the send pushes a different value.
    I was thinking in terms of operations associated via synchronization, not
    necessarily by data. Russ writes (in the linked thread) as if the data
    part were essential: "... moving one value out of the buffer to make room
    for a different value... The two are, strictly speaking, unrelated
    operations." My question is: what need is there for such a strict sense?

  • Kyle Lemons at Mar 7, 2013 at 11:54 pm

    On Thu, Mar 7, 2013 at 9:12 AM, Gordon Klaus wrote:
    On Wed, Mar 6, 2013 at 3:38 PM, bryanturley wrote:
    On Wednesday, March 6, 2013 2:07:46 PM UTC-6, gordon...@gmail.com wrote:

    That's a curious little hole in the memory model. It says
    A receive from an unbuffered channel happens before the send on that
    channel completes.
    but I'd like it to say
    A receive from an unbuffered (or buffered, but full) channel happens
    before the send on that channel completes.
    Well if it receives on a full buffered channel it will be from the front,
    and your sent data will be at the end (FIFO).
    I wouldn't consider that the same as an unbuffered channel since the
    received data is not the same as the sent data.
    Perhaps in the case of make(chan T, 1) ?

    Yes, I see what you mean. Even in the case of make(chan T, 1), the
    receive gets the buffered value while the send pushes a different value.
    I was thinking in terms of operations associated via synchronization, not
    necessarily by data. Russ writes (in the linked thread) as if the data
    part were essential: "... moving one value out of the buffer to make
    room for a different value... The two are, strictly speaking, unrelated
    operations." My question is: what need is there for such a strict sense?
    If I understand it correctly, the relaxed requirements might enable
    optimizations during removal of elements from a buffered channel. In other
    words, in a tight loop you could pull all of the elements off the channel
    before blocking on the read, and only at that point would you block and the
    runtime go look for a sender waiting to fill it up. (the current behavior
    is similar to this).
  • Gordon Klaus at Mar 8, 2013 at 1:38 am
    That makes sense. Thanks for the explanation. I guess what Russ meant was
    that the two operations (reading from and writing to a full buffered
    channel) are unrelated insofar as the purpose of channels is concerned:
      Channels are for communicating data; they also provide synchronization,
    but only enough to enable this communication.

    As you pointed out, the spec does in fact ensure that the write to a full
    channel will not proceed before a read makes space for it. So the proposed
    WaitGroup limiting should work in any correct implementation of the
    language.

    Sure enough, the optimization you described is present.
    http://play.golang.org/p/78ycD3sPZj On the playground, it's totally
    aggressive, completely emptying the channel before accepting any more
    values. Interestingly, the same code on my machine with the same version
    (1.0.3) is not as aggressive; in fact, it sometimes perfectly interleaves
    the sends and receives.

  • Dan Kortschak at Mar 8, 2013 at 2:08 am

    On Thu, 2013-03-07 at 20:38 -0500, Gordon Klaus wrote:
    Sure enough, the optimization you described is present.
    http://play.golang.org/p/78ycD3sPZj On the playground, it's totally
    aggressive, completely emptying the channel before accepting any more
    values. Interestingly, the same code on my machine and the same
    version
    (1.0.3) is not as aggressive, in fact sometimes it perfectly
    interleaves
    the sends and receives.
    Calling runtime.Gosched() gets that behaviour.

    http://play.golang.org/p/r0YNGZzlIk

  • Dmitry Vyukov at Mar 8, 2013 at 9:05 am

    On Fri, Mar 8, 2013 at 5:38 AM, Gordon Klaus wrote:
    That makes sense. Thanks for the explanation. I guess what Russ meant was
    that the two operations (reading from and writing to a full buffered
    channel) are unrelated insofar as the purpose of channels is concerned:
    Channels are for communicating data; they also provide synchronization, but
    only enough to enable this communication.

    As you pointed out, the spec does in fact ensure that the write to a full
    channel will not proceed before a read makes space for it. So the proposed
    WaitGroup limiting should work in any correct implementation of the
    language.

    No, that's not true. It's perfectly fine to allow any code to hoist
    above a chan send, or to sink below a chan receive. So, what you are
    saying regarding chans is true, but it won't limit the number of
    concurrently executing goroutines in any way.


  • Matt at Mar 8, 2013 at 10:33 am
    So my solution was less elegant than the proposed buffered channel. I limit
    the goroutines at dispatch, i.e. my dispatch loop calls WaitGroup.Wait()
    after, say, 8 routines are launched. I say it's inelegant because what
    happens is you launch 8, wait for 8 to complete, launch another 8, wait for
    them to complete, etc.

    I think WaitGroup could nicely encapsulate a limiter so that you could
    write code such as:

    wg := sync.LimitWaitGroup{limit: 8}
    for i := 0; i < allWork; i++ {
         wg.Add(1) // will block if count == 8
         go doWork()
    }
    wg.Wait() // blocks until count == 0
  • Gordon Klaus at Mar 8, 2013 at 5:37 pm

    On Fri, Mar 8, 2013 at 4:05 AM, Dmitry Vyukov wrote:

    No, that's not true. It's perfectly fine to allow any code to hoist
    above chan send, or to sink below chan receive.

    OK, but what about the requirement that the compiler must not change
    observable behavior? The spec places limits on what transformations are
    allowed.

    Consider this code (http://play.golang.org/p/OAYLUpNgXr), which uses a chan
    as a mutex:
    s := ""
    c := make(chan int, 1)
    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
    c <- 1
    s += "a"
    <-c
    wg.Done()
    }()
    go func() {
    c <- 1
    s += "b"
    <-c
    wg.Done()
    }()
    wg.Wait()
    fmt.Println(s)
    Your explanation in the other thread suggested that a compiler could
    completely remove the channel. But if we were to observe an output other
    than "ab" or "ba", then we would know that more than one value had been
    buffered on the chan at the same time -- in violation of the spec. So it
    seems that such a transformation should not be allowed in this example.

    So, what you are
    saying regarding chans is true, but it won't limit number of
    concurrently executing goroutines in any way.
    If, in a specific program, the observable effects of those goroutines
    didn't force us to infer a violation of the spec, then I agree that their
    number would be unlimited. But I guess many programs that want to limit
    concurrency will make it clear when they don't.

    I'm sure there are some subtleties I'm missing. I'd be happy if you could
    point them out!

  • Dmitry Vyukov at Mar 8, 2013 at 5:41 pm

    On Fri, Mar 8, 2013 at 9:37 PM, Gordon Klaus wrote:
    On Fri, Mar 8, 2013 at 4:05 AM, Dmitry Vyukov wrote:

    No, that's not true. It's perfectly fine to allow any code to hoist
    above chan send, or to sink below chan receive.

    OK, but what about the requirement that the compiler must not change
    observable behavior? The spec places limits what transformations are
    allowed.

    Consider this code (http://play.golang.org/p/OAYLUpNgXr), which uses a chan
    as a mutex:
    s := ""
    c := make(chan int, 1)
    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
    c <- 1
    s += "a"
    <-c
    wg.Done()
    }()
    go func() {
    c <- 1
    s += "b"
    <-c
    wg.Done()
    }()
    wg.Wait()
    fmt.Println(s)
    Your explanation in the other thread suggested that a compiler could
    completely remove the channel. But if we were to observe an output other
    than "ab" or "ba", then we would know that more than one value had been
    buffered on the chan at the same time -- in violation of the spec. So it
    seems that such a transformation should not be allowed in this example.

    Why do you think that the program is allowed to print only ab or ba?

    This program contains a data race, so the behavior is undefined.


  • Gordon Klaus at Mar 8, 2013 at 6:06 pm

    On Fri, Mar 8, 2013 at 12:41 PM, Dmitry Vyukov wrote:

    Why do you think that the program is allowed to print only ab or ba?
    Because, as I stated, if it printed something else that would imply a
    violation of the spec. E.g. if it printed "a" or "b" then we would have to
    assume that the statements s += "a" and s += "b" ran concurrently, meaning
    that two values were buffered on the chan which has only capacity for one.
      Where is the flaw in this reasoning?

    This program contains a data race, so the behavior is undefined.
    Does the determination of whether a program has a data race not consider
    whether any potential racing behavior would actually imply a violation of
    the spec?

  • Dmitry Vyukov at Mar 8, 2013 at 6:30 pm

    On Fri, Mar 8, 2013 at 10:06 PM, Gordon Klaus wrote:
    On Fri, Mar 8, 2013 at 12:41 PM, Dmitry Vyukov wrote:

    Why do you think that the program is allowed to print only ab or ba?
    Because, as I stated, if it printed something else that would imply a
    violation of the spec. E.g. if it printed "a" or "b" then we would have to
    assume that the statements s += "a" and s += "b" ran concurrently, meaning
    that two values were buffered on the chan which has only capacity for one.
    Where is the flaw in this reasoning?
    You are assuming sequential consistency; it is not guaranteed.
    Your program is equivalent to:

    go func() {
    s += "a"
    c <- 1
    <-c
    wg.Done()
    }()
    go func() {
    s += "b"
    c <- 1
    <-c
    wg.Done()
    }()

    in which +=a and +=b run concurrently and chan capacity is not violated.

  • Steven Blenkinsop at Mar 8, 2013 at 6:38 pm

    On Fri, Mar 8, 2013 at 1:06 PM, Gordon Klaus wrote:
    On Fri, Mar 8, 2013 at 12:41 PM, Dmitry Vyukov wrote:

    Why do you think that the program is allowed to print only ab or ba?
    Because, as I stated, if it printed something else that would imply a
    violation of the spec. E.g. if it printed "a" or "b" then we would have to
    assume that the statements s += "a" and s += "b" ran concurrently, meaning
    that two values were buffered on the chan which has only capacity for one.
    Where is the flaw in this reasoning?
    The guarantee that reordering can't change observed behaviour only applies
    locally. Also, the order of memory accesses observed by one goroutine can
    be different from the order observed by another goroutine, as long as the
    ordering guarantees given in the memory model are met in each. Thus, we
    have to consider the program from the perspective of each goroutine
    separately:
    http://play.golang.org/p/IX2ZF0O1r0
    http://play.golang.org/p/i0oQhyKiXY

    Because the memory model makes no guarantees about the ordering of memory
    accesses that locally happen after a send or before a receive on a buffered
    channel, each of these programs is equivalent to the original you posted
    from the perspective of their respective goroutines. As you can see, in
    each case, the accesses to `s` can happen concurrently without exceeding
    the buffer capacity of the channel.

  • Gordon Klaus at Mar 8, 2013 at 7:10 pm
    OK, it makes sense now. Thanks for the explanation. I see now that this
    is already covered pretty well in http://golang.org/ref/mem . I should
    have given that another read. But your example was a little more
    enlightening.

  • Steven Blenkinsop at Mar 8, 2013 at 7:43 pm

    On Friday, 8 March 2013, Gordon Klaus wrote:

    OK, it makes sense now. Thanks for the explanation. I see now that this
    is already covered pretty well in http://golang.org/ref/mem . I should
    have given that another read. But your example was a little more
    enlightening.
    Glad it helped. I'm just trying to work this through myself. I think I
    understand why the memory model implies this behaviour, but I still don't
    like it ;). It seems cheap, surprising, and pointless. The spec requires a
    certain ordering between receives and sends on a buffered channel, but
    because the memory model neglects to mention it, we can't use it to
    extrapolate an order for other operations in relation to these
    communications.

    Also, the guarantees surrounding send and receive statements depend on a
    dynamic quantity – the capacity of the channel, which isn't part of its
    type – which means that you don't know just from looking at a function what
    guarantees apply. Having consistent guarantees is possible; they've just
    chosen not to do it.

    Finally, it might not be obvious, but the example I posted requires
    something rather unnerving. In order for each goroutine to observe the
    transformation it does in the other, they have to disagree on the order
    in which the values are put into and removed from the buffer.

    --
    You received this message because you are subscribed to the Google Groups "golang-nuts" group.
    To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
    For more options, visit https://groups.google.com/groups/opt_out.
  • Steven Blenkinsop at Mar 8, 2013 at 8:01 pm
    Sorry, premature send from iPhone...

    Glad it helped. I'm just trying to work this through myself. I think I
    understand why the memory model implies this behaviour, but I still don't
    like it ;). It seems cheap, surprising, and pointless. The spec requires a
    certain ordering between receives and sends on a buffered channel, but
    because the memory model neglects to mention it, we can't use it to
    extrapolate an order for other operations in relation to these
    communications.

    Also, the guarantees surrounding send and receive statements depend on a
    dynamic quantity – the capacity of the channel, which isn't part of its
    type – which means that you don't know just from looking at a function what
    guarantees apply. Having consistent guarantees is possible (by generalizing
    the guarantee for capacity >= 0), they've just chosen not to do it.

    Finally, it might not be obvious, but the example I posted requires
    something rather unnerving. In order for each goroutine to observe
    concurrent access to `s` based on the given transformations, they have to
    disagree on the order in which the values are put into and removed from the
    buffer. This single-handedly invalidates any intuition someone might have
    about a channel's buffer acting as a synchronization variable. I'm not sure
    I can even think of all the possible implications right now. There are
    other possible transformations that work without raising this issue, but
    they can only allow at most two concurrent accesses to `s` (one access
    happens before the communications, and one happens after).

  • Dan Kortschak at Mar 8, 2013 at 8:47 pm
    This whole thread makes me a little queasy. I can see the justification in consistency, though I'm still working through the details. What concerns me is that the idiom of chans as mutexes is probably quite widespread, certainly in the context raised here.

    I commonly use:

    var wg sync.WaitGroup
    sem := make(chan struct{}, N)
    for i := range work {
       <-sem
       wg.Add(1)
       go func(i int) {
         defer func() { <- struct{}{}; wg.Done() }
         ...
       }(i)
    }
    wg.Wait()

    This differs in that the sem is filled outside the goroutine call, so intuitively it feels like it does the right thing, but after reading this thread through I'm not sure I trust my intuition.

    Dan


  • Matt at Mar 8, 2013 at 9:43 pm
    Wow this thread has run and run, some really interesting discussions,
    thanks!
  • Mike Solomon at Mar 8, 2013 at 10:44 pm
    To reply in a slightly different context, I recently thought I needed this
    exact functionality and built exactly what you proposed.

    What I ended up doing after a couple of days of thinking is separating out
    the semaphore and the wait group aspects - using the semaphore at a lower
    level to limit the use of finite resources.

    https://code.google.com/p/vitess/source/detail?r=cad9a1278e40e1cc705d4666d06c65e47f68e291
  • Sugu Sougoumarane at Mar 8, 2013 at 10:52 pm
    Based on previous comments, I believe the semaphore implementation is
    technically incorrect. Does that also make the example in the Effective Go
    document incorrect?
  • Gordon Klaus at Mar 9, 2013 at 12:19 am
    I guess you meant to write sem <- struct{}{} on the first line of the loop,
    and <-sem in the deferred func.

    You are right to doubt your intuition here. Because it is a buffered
    channel, the only guarantee is that the send will happen before its
    corresponding receive. There is no guarantee of limiting behavior,
    although I get the impression that it will almost certainly work in most
    implementations.

    Even though it will probably work fine this way, I think the other way
    around (swapping send and receive, prefilling the chan) is actually
    clearer: Think of the channel as a pool of limited resources. (In your
    example there is no data associated with the resource, but it could be
    files or net.Conns or whatever.) Ownership of a resource allows the owner
    to do work; when done, the owner returns the resource to the pool, allowing
    another worker to use it. In the static case, a buffered pool is filled
    synchronously before processing begins. In general, another goroutine
    could dynamically provide the resource, adjusting the number available as
    it sees fit (here, the channel need not be buffered).
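    [The token-pool pattern Gordon describes can be sketched roughly as
    follows. This is a minimal illustration, not code from the thread;
    the names `processAll`, `tokens`, `items`, and `limit` are invented.
    Note that the release send and the next acquire's receive form
    exactly the buffered send->receive pair the memory model does
    guarantee.]

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    	"sync/atomic"
    )

    // processAll runs one goroutine per item but allows at most limit of
    // them to work at a time, using a prefilled channel as a token pool.
    func processAll(items, limit int) int64 {
    	// Fill the pool synchronously before processing begins.
    	tokens := make(chan struct{}, limit)
    	for i := 0; i < limit; i++ {
    		tokens <- struct{}{}
    	}

    	var done int64
    	var wg sync.WaitGroup
    	for i := 0; i < items; i++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			<-tokens                                // acquire: blocks while limit tokens are in use
    			defer func() { tokens <- struct{}{} }() // release: return the token to the pool
    			atomic.AddInt64(&done, 1)               // stand-in for real work
    		}()
    	}
    	wg.Wait()
    	return done
    }

    func main() {
    	fmt.Println(processAll(10, 3))
    }
    ```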

    On second thought, if the intent is only to limit the number of goroutines
    then it is probably clearest to just launch exactly that many goroutines
    and let them read work items from a channel. I think this was mentioned
    recently on another thread.
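    [Gordon's second suggestion, launching exactly that many goroutines and
    feeding them a work channel, might look like this. Again a sketch with
    invented names (`sumWith`, `work`, `results`), not code from the thread.]

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    // sumWith launches exactly workers goroutines that drain work items
    // from a shared channel, bounding concurrency by construction.
    func sumWith(workers int, items []int) int {
    	work := make(chan int)
    	results := make(chan int)

    	var wg sync.WaitGroup
    	for w := 0; w < workers; w++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			for n := range work { // each worker pulls items until work is closed
    				results <- n * n
    			}
    		}()
    	}

    	go func() {
    		for _, n := range items {
    			work <- n
    		}
    		close(work) // no more items; workers exit their range loops
    	}()

    	go func() {
    		wg.Wait()
    		close(results) // all workers done; stop the collection loop below
    	}()

    	total := 0
    	for r := range results {
    		total += r
    	}
    	return total
    }

    func main() {
    	fmt.Println(sumWith(3, []int{1, 2, 3, 4})) // sum of squares: 1+4+9+16
    }
    ```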

  • Dan Kortschak at Mar 9, 2013 at 12:47 am

    On 09/03/2013, at 10:49 AM, "Gordon Klaus" wrote:

    I guess you meant to write sem <- struct{}{} on the first line of the loop, and <-sem in the deferred func.
    Indeed. Thank you for reading through my lack of coffee.
    You are right to doubt your intuition here. Because it is a buffered channel, the only guarantee is that the send will happen before its corresponding receive. There is no guarantee of limiting behavior, although I get the impression that it will almost certainly work in most implementations.
    Yes, it works now. But I have in mind what will potentially change.
    Even though it will probably work fine this way, I think the other way around (swapping send and receive, prefilling the chan) is actually clearer: Think of the channel as a pool of limited resources.
    Yes. Token hand-out and recovery.
    (In your example there is no data associated with the resource, but it could be files or net.Conns or whatever.) Ownership of a resource allows the owner to do work; when done, the owner returns the resource to the pool, allowing another worker to use it. In the static case, a buffered pool is filled synchronously before processing begins. In general, another goroutine could dynamically provide the resource, adjusting the number available as it sees fit (here, the channel need not be buffered).

    On second thought, if the intent is only to limit the number of goroutines then it is probably clearest to just launch exactly that many goroutines and let them read work items from a channel. It think this was mentioned recently on another thread.
    I'll obviously need to revisit some of my code.

    thanks
    Dan

  • Gordon Klaus at Mar 8, 2013 at 11:21 pm
    On the one hand, I can appreciate the ideal that channels only synchronize
    when data is exchanged. If data exchange is their purpose, then any other
    use could be considered abuse. On the other hand, they don't
    synchronize only the data that is passed on the channel; any writes that
    happen before a channel send are visible to reads that happen after the
    corresponding receive. It could be argued that, because they provide so
    much in the way of synchronization, it is also their purpose -- not merely
    a side-effect. Then, shouldn't they fulfill that purpose completely and
    consistently, as you say?

    I guess the Go authors have good reasons. They seem to have thought of
    everything. We'd probably be wiser not to distract them as they work so
    diligently to put Go 1.1 out there for us ;)

  • Dmitry Vyukov at Mar 9, 2013 at 11:27 am

    On Sat, Mar 9, 2013 at 3:20 AM, Gordon Klaus wrote:
    On the one hand, I can appreciate the ideal that channels only synchronize
    when data is exchanged. If data exchange is their purpose, then any other
    use could be considered abuse. On the other hand, they don't synchronize
    only the data that is passed on the channel; any writes that happen before a
    channel send are visible to reads that happen after the corresponding
    receive. It could be argued that, because they provide so much in the way
    of synchronization, it is also their purpose -- not merely a side-effect.
    Then, shouldn't they fulfill that purpose completely and consistently, as
    you say?

    You may not be realizing what it actually means to provide complete
    and consistent sequential consistency for chan operations.
    Consider the following two programs:
    http://play.golang.org/p/8eWCIhWpPv
    http://play.golang.org/p/JRxL3JTtX1
    If chan operations provide complete and consistent sequential
    consistency, then the programs are legal (should not print FAIL). But
    I am not sure whether we actually want to make such use cases legal.
    Basically, under sequential consistency any operation that reveals
    internal state in any way, shape or form (e.g. panic from double close
    says you that somebody else has already close it) synchronizes with
    any operation that modifies internal state in any observable way (e.g.
    chan close makes chan send panic).
    And there is more to it. Under sequential consistency the following
    program should not fail. Goroutine 1 modifies state of chan 1 in some
    way. Concurrently G2 modifies state of chan 2 in some way. G3 and G4
    independently query modifications on chan 1 and 2. G3 and G4 expect to
    observe the modifications in the same order.

    The chan-based semaphore/mutex falls into the same category. It tries
    to observe secondary side effects of some operations in some strange
    way, and then infer some inter-goroutine relations from that based on
    the sequential-consistency assumption. You do not want to go there.
    Moreover, it can actually make implementations slower on architectures
    with relaxed memory model (ARM/POWER) and especially on DSM
    (distributed shared memory) systems where any synchronization is
    extremely expensive.
    Moreover, it will make the dynamic data race detector miss lots of
    bugs, because it will have to assume that any chan operation
    synchronizes with any other chan operation just in case.

  • Gordon Klaus at Mar 9, 2013 at 2:28 pm
    Thanks for your illuminating response.
    On Sat, Mar 9, 2013 at 6:27 AM, Dmitry Vyukov wrote:

    You may not be realizing what it actually means to provide complete
    and consistent sequential consistency for chan operations.
    This is true. It was also a poor choice of words. I was only referring to
    the completeness/consistency that Steven had described: guaranteeing that
    the Mth receive happens before the M+Nth send completes on an N-buffered
    channel. I didn't even think about all the other guarantees that could be
    made: buffered send->send (your 1st program below); close->close (2nd
    program); close->send; maybe even receive->receive?

    The current guarantees are only send->receive, close->receive, and
    unbuffered receive->send. So fewer than half of the total possible, and
    only those necessary for data exchange. I guess channels really are just
    about data exchange, and their synchronization a side-effect only to be
    relied upon in these specific cases. I wonder if it would be a good idea
    to publish this more visibly. http://golang.org/ref/mem has always given
    me the impression of a very technical document non-essential to most users;
    but it's not so. At the least, there should be a link to it from
    http://golang.org/ref/spec#Channel_types, no?
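    [The send->receive guarantee Gordon lists is the one the memory model
    document illustrates: a write made before a send on a channel is visible
    after the corresponding receive completes. Roughly:]

    ```go
    package main

    import "fmt"

    var a string
    var c = make(chan int, 10)

    func f() {
    	a = "hello, world" // this write happens before the send below...
    	c <- 0
    }

    func main() {
    	go f()
    	<-c            // ...and the send happens before this receive completes,
    	fmt.Println(a) // so this is guaranteed to print "hello, world"
    }
    ```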

    Consider the following two programs:
    http://play.golang.org/p/8eWCIhWpPv
    http://play.golang.org/p/JRxL3JTtX1
    If chan operations provide complete and consistent sequential
    consistency, then the programs are legal (should not print FAIL). But
    I am not sure whether we actually want to make such use cases legal.
    Basically, under sequential consistency any operation that reveals
    internal state in any way, shape or form (e.g. panic from double close
    says you that somebody else has already close it) synchronizes with
    any operation that modifies internal state in any observable way (e.g.
    chan close makes chan send panic).
    And there is more to it. Under sequential consistency the following
    program should not fail. Goroutine 1 modifies state of chan 1 in some
    way. Concurrently G2 modifies state of chan 2 in some way. G3 and G4
    independently query modifications on chan 1 and 2. G3 and G4 expect to
    observe the modifications in the same order.

    The chan-based semaphore/mutex falls into the same category. It tries
    to observe secondary side effects of some operations in some strange
    way, and then infer some inter-goroutine relations from that based on
    the sequential-consistency assumption. You do not want to go there.
    Moreover, it can actually make implementations slower on architectures
    with relaxed memory model (ARM/POWER) and especially on DSM
    (distributed shared memory) systems where any synchronization is
    extremely expensive.
    The reason we're using concurrency in the first place is for speed, so it
    makes sense that our communication primitives should have minimal overhead.

    Moreover, it will make the dynamic data race detector miss lots of
    bugs, because it will have to assume that any chan operation
    synchronizes with any other chan operation just in case.
  • Dmitry Vyukov at Mar 10, 2013 at 5:01 pm

    On Sat, Mar 9, 2013 at 6:28 PM, Gordon Klaus wrote:
    Thanks for your illuminating response.
    On Sat, Mar 9, 2013 at 6:27 AM, Dmitry Vyukov wrote:

    You may not be realizing what it actually means to provide complete
    and consistent sequential consistency for chan operations.

    This is true. It was also a poor choice of words. I was only referring to
    the completeness/consistency that Steven had described: guaranteeing that
    the Mth receive happens before the M+Nth send completes on an N-buffered
    channel. I didn't even think about all the other guarantees that could be
    made: buffered send->send (your 1st program below); close->close (2nd
    program); close->send; maybe even receive->receive?
    Yes, receive->receive is also possible: if you put 1 element in the
    buffer and then do 2 non-blocking receives, then under sequential
    consistency a failed receive allows you to infer that the other
    receive has already succeeded.
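    [In Go, a non-blocking receive is written with a select and a default
    case. A minimal illustration of the scenario Dmitry describes; it is
    single-goroutine here, so the outcome is deterministic. The point in the
    concurrent case is that a failed receive would, under sequential
    consistency, let you infer the other receive had already succeeded, an
    inference Go's memory model deliberately does not license.]

    ```go
    package main

    import "fmt"

    func main() {
    	c := make(chan int, 1)
    	c <- 42 // one element in a 1-buffered channel

    	succeeded := 0
    	for i := 0; i < 2; i++ {
    		select {
    		case v := <-c: // non-blocking receive: takes the element if present
    			succeeded++
    			_ = v
    		default: // buffer empty: the receive "fails" without blocking
    		}
    	}
    	fmt.Println(succeeded) // exactly one of the two receives can succeed
    }
    ```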


    Regarding guaranteeing that the Mth receive happens before the M+Nth
    send completes on an N-buffered channel:
    I don't have a strong opinion here. Definitely it's possible to
    guarantee that, but it will slow down some implementations (a
    ring-buffer implementation has to guarantee it anyway to prevent races
    on the buffer, but linked-list implementations do not). The current
    guarantees are nice in the sense that they are limited to "data
    exchange". You can think of a chan-based semaphore as follows --
    initially the chan contains N tokens; in order to conduct some
    operations a goroutine needs to own one of the tokens, and it puts the
    token back once the operation is completed. So it's "data exchange" in
    some sense as well.



    The current guarantees are only send->receive, close->receive, and
    unbuffered receive->send. So fewer than half of the total possible, and
    only those necessary for data exchange. I guess channels really are just
    about data exchange, and their synchronization a side-effect only to be
    relied upon in these specific cases. I wonder if it would be a good idea to
    publish this more visibly. http://golang.org/ref/mem has always given me
    the impression of a very technical document non-essential to most users; but
    it's not so. At the least, there should be a link to it from
    http://golang.org/ref/spec#Channel_types, no?
  • Dan Kortschak at Mar 9, 2013 at 4:45 pm
    Does all this mean that the examples in the memory model document (http://golang.org/ref/mem#tmp_6) are illegal since the assignment to a is not directly involving the chan operation?

  • Dmitry Vyukov at Mar 9, 2013 at 4:48 pm

    On Sat, Mar 9, 2013 at 8:45 PM, Dan Kortschak wrote:
    Does all this mean that the examples in the memory model document (http://golang.org/ref/mem#tmp_6) are illegal since the assignment to a is not directly involving the chan operation?
    This one looks correct, send on chan happens before receive on chan.

  • Robert at Mar 9, 2013 at 4:52 pm

    On Saturday, 9 March 2013 17:45:18 UTC+1, kortschak wrote:

    Does all this mean that the examples in the memory model document (
    http://golang.org/ref/mem#tmp_6) are illegal since the assignment to a is
    not directly involving the chan operation?
    No. I don't quite understand what you mean by `directly involving', so
    I'll just explain why I think otherwise.

    In the first example, the chan receive must happen-after the chan send.
    Thus, everything that happens-before the chan send will happen-before the
    chan receive. So the assignment to a will happen-before the chan receive.
    Print(a) happens-after the chan-receive, so it also happens-after the
    assignment.

    The second example is very similar: for unbuffered channels, the
    corresponding chan send and receive are guaranteed to happen simultaneously
    (that is, the receive must happen-after the send starts and happen-before
    the send completes).

    Robert

  • Dan Kortschak at Mar 9, 2013 at 5:14 pm
    No, you're right. I shouldn't post when I wake up at 3am. I was complicating it in my head.

  • Dan Kortschak at Mar 9, 2013 at 5:16 pm
    Just to clarify. I was getting confused because of Dmitry's discussion about hoisting.

  • Kyle Lemons at Mar 11, 2013 at 12:59 am

    On Fri, Mar 8, 2013 at 1:05 AM, Dmitry Vyukov wrote:
    On Fri, Mar 8, 2013 at 5:38 AM, Gordon Klaus wrote:
    That makes sense. Thanks for the explanation. I guess what Russ meant was
    that the two operations (reading from and writing to a full buffered
    channel) are unrelated insofar as the purpose of channels is concerned:
    Channels are for communicating data; they also provide synchronization, but
    only enough to enable this communication.

    As you pointed out, the spec does in fact ensure that the write to a full
    channel will not proceed before a read makes space for it. So the proposed
    WaitGroup limiting should work in any correct implementation of the
    language.

    No, that's not true. It's perfectly fine to allow any code to hoist
    above chan send, or to sink below chan receive. So, what you are
    saying regarding chans is true, but it won't limit number of
    concurrently executing goroutines in any way.
    How does this apply to synchronous channels? Obviously such reordering
    makes sense with an asynchronous channel, but synchronous channels have
    stronger ordering guarantees. So, would such optimizations necessarily be
    run-time optimizations in the cases where the channel's buffering cannot be
    known? This distinction (as mentioned elsewhere in the thread) can make a
    particular piece of code valid given some channels but not others. That
    seems confusing at the very least, and highly problematic at the very
    worst (especially to those reading the code).

    Sure enough, the optimization you described is present.
    http://play.golang.org/p/78ycD3sPZj On the playground, it's totally
    aggressive, completely emptying the channel before accepting any more
    values. Interestingly, the same code on my machine and the same version
    (1.0.3) is not as aggressive, in fact sometimes it perfectly interleaves the
    sends and receives.

    On Thu, Mar 7, 2013 at 6:53 PM, Kyle Lemons wrote:

    On Thu, Mar 7, 2013 at 9:12 AM, Gordon Klaus <gordon.klaus@gmail.com> wrote:
    On Wed, Mar 6, 2013 at 3:38 PM, bryanturley <bryanturley@gmail.com> wrote:
    On Wednesday, March 6, 2013 2:07:46 PM UTC-6, gordon...@gmail.com wrote:
    That's a curious little hole in the memory model. It says
    A receive from an unbuffered channel happens before the send on that
    channel completes.
    but I'd like it to say
    A receive from an unbuffered (or buffered, but full) channel happens
    before the send on that channel completes.
    Well if it receives on a full buffered channel it will be from the
    front, and your sent data will be at the end (FIFO).
    I wouldn't consider that the same as an unbuffered channel since the
    received data is not the same as the sent data.
    Perhaps in the case of make(chan T, 1) ?
    Yes, I see what you mean. Even in the case of make(chan T, 1), the
    receive gets the buffered value while the send pushes a different
    value.
    I was thinking in terms of operations associated via synchronization, not
    necessarily by data. Russ writes (in the linked thread) as if the data part
    were essential: "... moving one value out of the buffer to make room for a
    different value... The two are, strictly speaking, unrelated operations."
    My question is: what need is there for such a strict sense?

    If I understand it correctly, the relaxed requirements might enable
    optimizations during removal of elements from a buffered channel. In other
    words, in a tight loop you could pull all of the elements off the channel
    before blocking on the read, and only at that point would you block and the
    runtime go look for a sender waiting to fill it up. (the current behavior
    is similar to this).
  • Dmitry Vyukov at Mar 11, 2013 at 5:40 am

    On Mon, Mar 11, 2013 at 4:58 AM, Kyle Lemons wrote:
    On Fri, Mar 8, 2013 at 1:05 AM, Dmitry Vyukov wrote:

    On Fri, Mar 8, 2013 at 5:38 AM, Gordon Klaus <gordon.klaus@gmail.com> wrote:
    That makes sense. Thanks for the explanation. I guess what Russ meant was
    that the two operations (reading from and writing to a full buffered
    channel) are unrelated insofar as the purpose of channels is concerned:
    Channels are for communicating data; they also provide synchronization, but
    only enough to enable this communication.

    As you pointed out, the spec does in fact ensure that the write to a full
    channel will not proceed before a read makes space for it. So the proposed
    WaitGroup limiting should work in any correct implementation of the
    language.

    No, that's not true. It's perfectly fine to allow any code to hoist
    above chan send, or to sink below chan receive. So, what you are
    saying regarding chans is true, but it won't limit number of
    concurrently executing goroutines in any way.

    How does this apply to synchronous channels? Obviously such reordering
    makes sense with an asynchronous channel, but synchronous channels have
    stronger ordering guarantees. So, would such optimizations necessarily be
    run-time optimizations in the cases where the channel's buffering cannot be
    known?
    Yes, I was thinking about memory accesses reordering in processor pipeline.
    This distinction (as mentioned elsewhere in the thread) can make a
    particular piece of code valid given some channels but not others.
    Can you show such piece of code?

    That
    seems confusing at the very least, and highly problematic at the very worst
    (especially to those reading the code).
  • Kyle Lemons at Mar 11, 2013 at 6:51 am

    On Sun, Mar 10, 2013 at 10:40 PM, Dmitry Vyukov wrote:
    On Mon, Mar 11, 2013 at 4:58 AM, Kyle Lemons wrote:
    On Fri, Mar 8, 2013 at 1:05 AM, Dmitry Vyukov wrote:

    On Fri, Mar 8, 2013 at 5:38 AM, Gordon Klaus <gordon.klaus@gmail.com> wrote:
    That makes sense. Thanks for the explanation. I guess what Russ meant was
    that the two operations (reading from and writing to a full buffered
    channel) are unrelated insofar as the purpose of channels is concerned:
    Channels are for communicating data; they also provide synchronization, but
    only enough to enable this communication.

    As you pointed out, the spec does in fact ensure that the write to a full
    channel will not proceed before a read makes space for it. So the proposed
    WaitGroup limiting should work in any correct implementation of the
    language.

    No, that's not true. It's perfectly fine to allow any code to hoist
    above chan send, or to sink below chan receive. So, what you are
    saying regarding chans is true, but it won't limit number of
    concurrently executing goroutines in any way.

    How does this apply to synchronous channels? Obviously such reordering
    makes sense with an asynchronous channel, but synchronous channels have
    stronger ordering guarantees. So, would such optimizations necessarily be
    run-time optimizations in the cases where the channel's buffering cannot be
    known?
    Yes, I was thinking about memory accesses reordering in processor pipeline.
    This distinction (as mentioned elsewhere in the thread) can make a
    particular piece of code valid given some channels but not others.
    Can you show such piece of code?

    To slightly modify a piece of code from the memory model:

    func g1(a *string, c chan bool) {
       *a = "hello" // 1
       <-c // 2
    }

    func g2(c chan bool) {
       var a string
       go g1(&a, c)
       c <- true // 3
       print(a) // 4
    }

    g2(make(chan bool))    // valid
    g2(make(chan bool, 1)) // invalid

    The way I read it, the assignment to *a is only guaranteed to happen
    before the print in the first case.

    The only guarantees, according to the memory model, are:
    - a send on a channel [3] happens before the corresponding receive [2]
    completes
    - within g1, [1] happens before [2]
    - within g2, [3] happens before [4]

    Thus, a correct total ordering would be [3(send), 4, 1, 2(recv)], and so
    [1] is not guaranteed to happen before [4].

    In the first case, the additional constraint makes it unambiguous:
    - a receive from an unbuffered channel [2] happens before the send on the
    channel [3] completes

    So a correct total ordering would be [3(send), 1, 2(recv), 2(complete),
    3(complete), 4]. The various reorderings of this still hold that [1]
    happens before [4].

    That
    seems confusing at the very least, and highly problematic at the very worst
    (especially to those reading the code).
