FAQ
Based on this idea and sample code I ended up writing an entire package
that implements a bunch of related ideas in this area:

https://github.com/eapache/channels
https://godoc.org/github.com/eapache/channels

It includes channels with "infinite" buffers, channels with
finite-but-resizable buffers, and a bunch of other useful types and
functions.
On Friday, February 3, 2012 2:32:28 PM UTC-5, Marcel wrote:

You can implement your own dynamically buffered channel with a slice,
using two channels to push and pop buffer values.
Here is an example of "a channel" with a dynamic buffer:
http://play.golang.org/p/AiHBsxTFpj
The example could be extended to have a maximum buffer size and
multiple receivers.
It is probably also possible to hack something together that extends the
channel type, so that the dynamically buffered channel can be used as a
normal channel.
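A minimal sketch of that idea, in the spirit of the playground example (the names and details here are mine): a goroutine owns a slice and bridges an input channel and an output channel, so the buffer grows as needed and sends never wait for the consumer.

```go
package main

import "fmt"

// dynChan couples an input and an output channel through a goroutine
// that buffers values in a slice, so the buffer grows as needed and a
// send only rendezvous with the buffering goroutine, never the consumer.
func dynChan() (chan<- int, <-chan int) {
	in := make(chan int)
	out := make(chan int)
	go func() {
		defer close(out)
		var buf []int
		for {
			if len(buf) == 0 {
				// Nothing buffered: we can only receive.
				v, ok := <-in
				if !ok {
					return
				}
				buf = append(buf, v)
				continue
			}
			select {
			case v, ok := <-in:
				if !ok {
					// Sender is done: drain what is left.
					for _, v := range buf {
						out <- v
					}
					return
				}
				buf = append(buf, v)
			case out <- buf[0]:
				buf = buf[1:]
			}
		}
	}()
	return in, out
}

func main() {
	in, out := dynChan()
	for i := 1; i <= 5; i++ {
		in <- i // never blocks waiting for the consumer
	}
	close(in)
	for v := range out {
		fmt.Println(v)
	}
}
```

The key point is the select in the middle: the goroutine is always willing to receive, and offers buf[0] whenever it holds something, which is the behaviour the example above implements.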
--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
For more options, visit https://groups.google.com/groups/opt_out.

  • Øyvind Teig at Jan 12, 2014 at 8:46 pm
    This is *very* interesting!

    There is also a mature channel class library in JCSP: Communicating
    Sequential Processes for Java. It is mature with respect to the
    functionality offered, and it has been thoroughly debugged over some 15
    years. See http://www.cs.kent.ac.uk/projects/ofa/jcsp/. Although it is
    built on Java, which has no channel primitive, the interface might be of
    interest.

    Personally I'd be thrilled to see 'xchan' also implemented (as a *first*;
    up to now it has only been modeled). Have a look at
    http://www.teigfam.net/oyvind/pub/pub_details.html#XCHAN

    (Once that is done, have a look at 'feathering', which lets a sender
    *avoid* sending unnecessary messages. Go does not have output guards in
    select, so it may not be possible, or needed, but have a look at
    http://www.teigfam.net/oyvind/pub/pub_details.html#FEATHERING)

    Øyvind Teig
    Trondheim, Norway
    http://www.teigfam.net/oyvind/home/

  • Dmitry Vyukov at Jan 13, 2014 at 3:58 am

    On Mon, Jan 13, 2014 at 12:46 AM, Øyvind Teig wrote:
    Personally I'd be thrilled to see 'xchan' also implemented (as a first, up
    to now it's only been modeled). Have a look at
    http://www.teigfam.net/oyvind/pub/pub_details.html#XCHAN
    Hi,

    What problems will it make easier to solve with Go?

  • Øyvind Teig at Jan 13, 2014 at 9:04 am
    On Monday, January 13, 2014 at 04:58:18 UTC+1, Dmitry Vyukov wrote:
    What problems will it make easier to solve with Go?

    Best question there is!

    In the xchan paper's Appendix I have two Go examples (courtesy of
    golang-nuts). However, I think the golang-nuts group is more qualified to
    answer this question than I am. I have mentioned xchan before in this
    group, but not really suggested it as needed for Go. In the scope of this
    thread, though, I thought it worth mentioning. Observe that the two papers
    are peer reviewed under a rather rigorous regime (CPA).

    The xchan paper also discusses the semantic differences between xchan and
    (output) guards in select. Go does not have the boolean expressions of
    guards as first-class citizens, but it can simulate them, and thus
    effectively has a flavour of them. Therefore I infer that xchan in Go
    would add something different. And with the introduction of 'feathering'
    in the second paper the difference is emphasized even further. I am not
    certain whether feathering could be introduced if it were not for xchan.

    One may view channels as the "goto of communication". In many use cases
    they are used to build higher-level patterns, for example some kind of
    transaction. If these patterns are then supported by a language which is
    able to run 'usage checks' on them, then we see that chan is only the
    beginning. The library provided for this thread addresses this, and also
    contains an overflowing channel type, a common way to try to make matters
    safe. In the occam world we made overflowing buffers to simulate this.

    The 'xchan' is one such pattern. It joins asynchronous (buffered,
    non-blocking) and synchronous (blocking) thinking and practice: xchan
    provides a 'safe' asynchronous mechanism on a synchronized foundation. I
    have spent several recent blog notes trying to understand the common view
    that blocking is 'evil' and asynchronous is all that makes sense. I think
    I have been able to explain it. This view is so common that I believe it
    is the main threat to Go, which may easily fail to communicate that
    blocking is not evil. Even in golang-nuts I often see it shine through
    that chan "needs" to be buffered. I added buffering in xchan simply to
    avoid it being a red rag to most programmers, but it is not needed. I
    fail to see any system where internal buffering plus an unbuffered xchan
    is not best.

    The 'feathering' is also one such pattern - where explicit non-interest
    saves us communications. It's an implicit type of subscription mechanism.

    I have suggested at least one example of xchan use in the paper: a server
    connected to an incoming connection, where the server never ever blocks
    because it empties itself over an xchan. So the server is always able to
    handle a connection. And overflow, flushing, prioritization etc. are
    handled by the server *application*.

    xchan could potentially also help move Go into the safety-critical
    (embedded) world, but I guess that is a long way off.

    Øyvind








  • Dmitry Vyukov at Jan 13, 2014 at 9:42 am

    On Mon, Jan 13, 2014 at 1:04 PM, Øyvind Teig wrote:
    I have suggested at least one example of xchan use in the paper: a server
    connected to an incoming connection, where the server never ever blocks
    because it empties itself over an xchan. So the server is always able to
    handle a connection. And overflow, flushing, prioritization etc. are
    handled by the server application.

    Is it Section 5.2 Local ChanSched ANSI C with Channel-Ready-Channel?
  • Øyvind Teig at Jan 13, 2014 at 2:00 pm
    On Monday, January 13, 2014 at 10:42:17 UTC+1, Dmitry Vyukov wrote:
    Is it Section 5.2 Local ChanSched ANSI C with Channel-Ready-Channel?
    Yes, but that example is tied to a small embedded system with "hand coded"
    XCHAN and usage of it. The paper suggests a first-class XCHAN, with usage
    checks by the compiler (if possible). Mentioning xchan in this golang-nuts
    thread was a suggestion to try it out as a channel type in package
    channels.
  • Dmitry Vyukov at Jan 13, 2014 at 2:25 pm

    On Mon, Jan 13, 2014 at 6:00 PM, Øyvind Teig wrote:

    I may be missing something, but it seems to me that the use case is
    already perfectly covered by non-blocking sends in Go:

    for msg := range inchan {
       select {
       case outchan <- msg:
       default:
         // handle overflow (decide what value(s) to discard)
       }
    }

  • Øyvind Teig at Jan 13, 2014 at 10:07 pm
    On Monday, January 13, 2014 at 15:24:36 UTC+1, Dmitry Vyukov wrote:
    I may be missing something, but it seems to me that the use case is
    already perfectly covered by non-blocking sends in Go:

    for msg := range inchan {
        select {
        case outchan <- msg:
        default:
            // handle overflow (decide what value(s) to discard)
        }
    }
    Full and overflow are not the same. The fact that a message is not taken
    by the receiver is not the same as an overflow. It simply means that the
    receiver is not ready, or that the channel buffer is full. The sending
    process (the "server") can continue to hold the value it tried, but
    failed, to send. Deciding that this needs overflow handling is premature.
    In the unbuffered case it could also mean that the receiver simply has not
    been scheduled yet, and that once scheduled it would immediately enter its
    select and take the input.

    If we did as in the example above, we would need to retry the send instead
    of just losing the message; we would have to busy-poll the send. This is
    discussed in the paper, and it is where the feedback channel of the xchan
    comes in. An xchan has three terminal points: send, receive and x-ready.
    The x-ready channel comes from the run-time system, not the receiver. I am
    therefore uncertain whether a Go channel treated as bidirectional would be
    able to carry this scheme - and whether an xchan would be needed.

    When the x-ready arrives, telling that the receiver is ready (or that the
    buffer has space), the server must send something. If it has received new
    input in the meantime, it had indeed overflowed, and it can decide to send
    the newest value (plus perhaps an overflow tag). If not, it sends the
    original message. It must, contractually, send something. This is where
    help from the compiler would be nice, to check this pattern. In the
    example above, with an unbuffered chan, it would theoretically be possible
    to lose all messages or to lose none, depending on scheduling.
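    Go has no XCHAN, but the contract just described (keep the value on a
    failed send, overwrite it with the newest on further input, and send the
    moment the receiver is ready) can be approximated by a small forwarder
    goroutine. This is my own sketch under that assumption, not the paper's
    XCHAN; the names are invented:

```go
package main

import "fmt"

// xchanLike gives the sender a channel whose sends never wait for the
// receiver: a forwarder goroutine holds at most one pending value,
// overwriting it with the newest while the receiver is not ready, and
// delivering it the moment the receiver is ready (the "x-ready" moment).
func xchanLike(out chan<- int) chan<- int {
	in := make(chan int)
	go func() {
		var pending int
		have := false
		for {
			if !have {
				pending = <-in
				have = true
				continue
			}
			select {
			case v := <-in:
				pending = v // not taken yet: the newest value wins
			case out <- pending: // receiver ready: something IS sent
				have = false
			}
		}
	}()
	return in
}

func main() {
	out := make(chan int)
	in := xchanLike(out)
	for i := 1; i <= 5; i++ {
		in <- i // rendezvous with the forwarder only, never the receiver
	}
	// No receiver ran during the sends, so 1..4 were overwritten:
	fmt.Println(<-out)
}
```

    Whether this faithfully stands in for an x-ready signal coming from the
    run-time system is exactly the open question about bidirectional Go
    channels raised above.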

    Handling of 'full' by the server could also mean to send a synchronizing
    stop message back to the external unit that sends to server, avoiding any
    overflow at that stage.

    In a Go context there may be subtleties that I have got wrong, but that is
    basically how xchan works (in the paper). I would certainly be happy to
    see xchan discussed by some of the Go experts here.

    The similarity with Linux select is discussed in the paper. At my original
    URL there is also a model of xchan, done by Prof. Peter H. Welch at the
    University of Kent. And I have modeled it in CSPm with FDR2.

    Øyvind

  • Dmitry Vyukov at Jan 14, 2014 at 7:23 am

    On Tue, Jan 14, 2014 at 2:07 AM, Øyvind Teig wrote:
    Full and overflow are not the same. The fact that a message is not taken by
    the receiver is not the same as an overflow.
    What is the difference? You just need to set the buffer size so that full == overflow.

  • Øyvind Teig at Jan 14, 2014 at 7:59 am
    On Tuesday, January 14, 2014 at 08:23:36 UTC+1, Dmitry Vyukov wrote:
    Full and overflow are not the same. The fact that a message is not taken by
    the receiver is not the same as an overflow.
    What is the difference? You just need to set buffer size so that
    full==overflow.

    I did try to explain exactly that. Looking at border cases is always a good
    idea:

        - For a zero-buffered channel, in your case sending to a receiver that
        is not ready will cause overflow (0==0). That is simply wrong
        semantics: the sender shall block. And as we know, blocking is not
        harmful or evil. The else-on-failed-send clause exists simply so that
        you can poll for a ready receiver, which is nice to have in some
        cases. And if you don't want to lose the message, you'd have to
        re-send on a timeout or busy-poll. That's not CSP.
        - For an N-buffered channel you say that when the channel is full
        there is overflow (N==N). That is also wrong semantics: overflow
        happens when you try to write into a full buffer. (The fact that the
        buffered channel then needs to keep the old messages, since there is
        no "flush" on a channel (and there should not be), points toward
        having full control of the buffer and just moving it inside a
        goroutine. So a zero-buffered chan, or an xchan, is in my opinion the
        most useful channel.)
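    The distinction is easy to demonstrate in Go (my own illustration of the
    point above, not an example from the paper): a failed non-blocking send on
    a full channel loses nothing, because the sender still holds the value and
    decides what to do next.

```go
package main

import "fmt"

// trySend is a non-blocking send: it reports whether v was accepted.
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int, 1)
	ch <- 1 // fill the one-slot buffer

	// The channel is now full, so a non-blocking send fails...
	fmt.Println(trySend(ch, 2)) // false: full, but nothing overflowed

	// ...yet the sender still holds 2 and may simply retry after the
	// receiver has drained the buffer.
	fmt.Println(<-ch)           // 1
	fmt.Println(trySend(ch, 2)) // true
	fmt.Println(<-ch)           // 2
}
```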

    Øyvind
  • Dmitry Vyukov at Jan 14, 2014 at 8:41 am

    On Tue, Jan 14, 2014 at 11:59 AM, Øyvind Teig wrote:
    kl. 08:23:36 UTC+1 tirsdag 14. januar 2014 skrev Dmitry Vyukov følgende:
    On Tue, Jan 14, 2014 at 2:07 AM, Øyvind Teig wrote:
    kl. 15:24:36 UTC+1 mandag 13. januar 2014 skrev Dmitry Vyukov følgende:
    I may be missing something, but it seems to me that the use case is
    already perfectly covered by non-blocking sends in Go:

    for msg := range inchan {
    select {
    case outchan <- msg:
    default:
    // handle overflow (decide what value(s) to discard)
    }
    }

    Full and overflow are not the same. The fact that a message is not taken
    by
    the receiver is not the same as an overflow.
    What is the difference? You just need to set buffer size so that
    full==overflow.

    I did try to explain exactly that. Looking at border cases is always a good
    idea:

    For a zero-buffered channel, in your case sending to a receiver that is not
    ready will cause overflow (0==0). That's simply wrong semantics, the sender
    shall block. And as we know blocking is not harmful or evil. The
    else-on-failed-send condition is there simply to be able to poll to see if
    there is a receiver, nice to have for some cases. And if you don't want to
    lose that message you'd have to do re-send by timeout or busy-poll. That's
    not CSP.
    For an N-buffered channel you say that when the channel is full there is
    overflow (N==N). That's also wrong semantics. Overflow happens when you try
    to fill into a full buffer. (The fact that the buffered channel then needs
    to keep the old messages since there is no "flush" on a channel (and there
    should not be..) points toward having full control of the buffer and just
    move it inside a goroutine. So a zero-buffered chan or xchan in my opinion
    is the most useful channel.)

    I probably understand what you want to do. Correct me if I am wrong,
    you want to implement arbitrary complex overload control on top of
    goroutines and channels.
    But, hey, Go has channels of channels, and this gives you natural way
    to say "I want a message. Now!":

    inc := make(chan *Req, 1000)
    outc := make(chan chan *Req, 1000)
    go func() {
        // dispatcher
        reqs := MakeHeapOrWhatever()
        for {
            select {
            case r := <-inc:
                reqs.Add(r)
            case c := <-outc:
                c <- reqs.Get()
            case <-time.After(...):
                reqs.Tick()
            }
        }
    }()

    for i := 0; i < 10; i++ {
        go func() {
            // consumer
            c := make(chan *Req, 1)
            for {
                outc <- c
                r := <-c
                Process(r)
            }
        }()
    }

    You also need to nil out outc in dispatcher select when reqs are
    empty, so that you don't receive from outc when you have nothing to
    send.

    And another option in Go is just to protect an arbitrary data
    structure with requests with a Mutex, and do arbitrary prioritization
    and eviction under the mutex. Plus a ticker goroutine to timeout
    requests.

    Still do not see a need in XCHANs.

  • Øyvind Teig at Jan 14, 2014 at 11:20 am
    At 09:40:45 UTC+1 on Tuesday, 14 January 2014, Dmitry Vyukov wrote:
    You also need to nil out outc in dispatcher select when reqs are
    empty, so that you don't receive from outc when you have nothing to
    send.
    Nice!

    Sending over a channel and then picking it up to use is a nice idiom.
    Occam-2 did not have it and I missed it. Occam-pi does, they call it MOBILE
    channels I believe. Go uses this to simulate the missing boolean expression
    in a guard: in order to close out other goroutines in a specific session, a
    session start also sends over a reply channel or channels over which to
    communicate while that session goes on.

    I think this is what you are suggesting. Your code seems to be getting
    close to Peter Welch's model. But you have a timeout in there, which tells
    me there's something wrong with it. I am learning by reading code since I
    fall short at writing any - so I'll have to be "clinical" about my analysis.

    Provided you did code a "model" of xchan, that's fine. I could do that in
    any channel based language by using what I have. But in the rationale for
    XCHAN that's not the point. The XCHAN by itself, as a first-class citizen of
    the language, joins asynch/nonblocking and synch/blocking and safe channels
    and breaks cycles to avoid deadlock in one chunk - and might save Go from
    the asynchronous hordes out there (..) As with any language feature, it's the
    basic need that would decide on whether to include it. Still I don't know
    if xchan is a good idea for Go... but I think I would know how and when I
    would use it if it were in there.

    Øyvind

    And another option in Go is just to protect an arbitrary data
    structure with requests with a Mutex, and do arbitrary prioritization
    and eviction under the mutex. Plus a ticker goroutine to timeout
    requests.

    Still do not see a need in XCHANs.
  • Dmitry Vyukov at Jan 14, 2014 at 11:36 am

    On Tue, Jan 14, 2014 at 3:20 PM, Øyvind Teig wrote:
    I think this is what you are suggesting. Your code seems to be getting close
    to Peter Welch's model. But you have a timeout in there, which tells me
    there's something wrong with it. I am learning by reading code since I at
    short at writing any - so I'l have to be "clinical" about my analysis.
    I've included timeouts to handle the following overload control policy:
    assume you want to send a TIMEOUT response to requests that are not
    serviced within 1 second.
    If the dispatcher reacts only to input messages and requests from
    workers, and there are no such messages/requests within 1 second, then
    the dispatcher cannot send TIMEOUT responses (it does not get control
    within that second). So in the Tick() method you can scan all
    outstanding requests, find very old ones, remove them from the
    container and send a TIMEOUT response.

    Provided you did code a "model" of xchan, that's fine. I could do that in
    any channel based language by using what I have. But in the rationale for
    XCHAN that's not the point. The XCHAN by itself, as a primary citizen of the
    language joins asynch/nonblocking and synch/blocking and safe channels and
    breaks cycles to avoid deadlock in one chump - and might save Go from the
    asynchronous hordes out there (..) As any language feature, it's the basic
    need that would decide on whether to include it. Still I don't know if xchan
    is a good idea for Go... but I think I would know how and when I would use
    it if it were in there.
    How and when would you use it?

    "joins asynch/nonblocking and synch/blocking and safe channels and
    breaks cycles to avoid deadlock in one chump" -- this sounds very
    abstract to me. For now I do not see any real-world problem that has no
    clean way to be solved in Go w/o xchans.

  • Øyvind Teig at Jan 14, 2014 at 5:08 pm
    At 12:35:38 UTC+1 on Tuesday, 14 January 2014, Dmitry Vyukov wrote:
    How and when would you use it?

    "joins asynch/nonblocking and synch/blocking and safe channels and
    breaks cycles to avoid deadlock in one chump" -- this sounds very
    abstract to me. For now I do not see any real-world problems with no
    clean ways to solve in Go w/o xchans.
    You may be right. But my paper does answer these questions in a
    non-abstract way.

    In the "real-world" anything blocking is 'evil'. I infer that Go's
    beautiful blocking channels (even buffered channels block when they are
    full) are considered horrifying by a large group of programmers. They would
    shy off. This is the *main* rationale for xchan (the rest is the 'abstract'
    paragraph above). When I programmed alone in occam I didn't miss it. I have
    tried to understand the mostly asynchronous world in some of these:
    http://www.teigfam.net/oyvind/home/technology/.

    Pn => S -> C ->

    I should have said this before: My wish would of course be to see any
    example code that does xchan semantics commented with "xchan idioms" from
    my paper and the figures there. There would be P(roducers) -> S(erver) ->
    C(onsumer) -> some hole. There would be local overflow handling in S and C
    is allowed to block on some hole forever (in which case all is lost). If C
    always gets rid of its data then Pn->S->C will not lose anything. The
    real-world may be in between. There is no dispatcher, as the x-ready
    feedback channel is triggered by the run-time. C is allowed to be read in a
    selective choice with other channels, and changing the input channel in C
    from reading on a chan to an xchan will not change a line in C. S is always
    ready to input data from any Pn (or else as specified). Timeout is a
    special case, not treated by xchan itself (and it should not be). The xchan
    between S and C may have zero buffering. And further: expanding xchan could
    open for feathering (but that, I admit, is harder with Go since there are
    no conditions in guards).

    Thank you!

    Øyvind

  • Dmitry Vyukov at Jan 15, 2014 at 10:36 am

    On Tue, Jan 14, 2014 at 9:08 PM, Øyvind Teig wrote:

    You may be right. But my paper does answer these questions in a
    non-abstract way.

    In the "real-world" anything blocking is 'evil'. I infer that Go's
    beautiful blocking channels (even buffered channels block when they are
    full) are considered horrifying by a large group of programmers.

    Blocking in Go is not bad. It is good, because it simplifies programs w/o
    sacrificing performance.
    That "blocking is bad" usually stems either from the "old single-thread unix
    world" or from the "thread-per-request" world. Their arguments do not hold
    for Go.




  • Øyvind Teig at Jan 16, 2014 at 6:32 am
    kl. 11:36:22 UTC+1 onsdag 15. januar 2014 skrev Dmitry Vyukov følgende:
    On Tue, Jan 14, 2014 at 9:08 PM, Øyvind Teig <[email protected]<javascript:>
    wrote:

    kl. 12:35:38 UTC+1 tirsdag 14. januar 2014 skrev Dmitry Vyukov følgende:
    On Tue, Jan 14, 2014 at 3:20 PM, Øyvind Teig <[email protected]>
    wrote:
    kl. 08:23:36 UTC+1 tirsdag 14. januar 2014 skrev Dmitry Vyukov
    følgende:
    On Tue, Jan 14, 2014 at 2:07 AM, Øyvind Teig <[email protected]>
    wrote:
    kl. 15:24:36 UTC+1 mandag 13. januar 2014 skrev Dmitry Vyukov
    følgende:
    I may be missing something, but it seems to me that the use
    case is
    already perfectly covered by non-blocking sends in Go:

    for msg := range inchan {
    select {
    case outchan <- msg:
    default:
    // handle overflow (decide what value(s) to discard)
    }
    }

    Full and overflow are not the same. The fact that a message is
    not
    taken
    by
    the receiver is not the same as an overflow.
    What is the difference? You just need to set buffer size so that
    full==overflow.

    I did try to explain exactly that. Looking at border cases is
    always a
    good
    idea:

    For a zero-buffered channel, in your case sending to a receiver
    that is
    not
    ready will cause overflow (0==0). That's simply wrong semantics,
    the
    sender
    shall block. And as we know blocking is not harmful or evil. The
    else-on-failed-send condition is there simply to be able to poll to
    see
    if
    there is a receiver, nice to have for some cases. And if you don't
    want
    to
    lose that message you'd have to do re-send by timeout or busy-poll.
    That's
    not CSP.
    For an N-buffered channel you say that when the channel is full
    there is
    overflow (N==N). That's also wrong semantics. Overflow happens when
    you
    try
    to fill into a full buffer. (The fact that the buffered channel
    then
    needs
    to keep the old messages since there is no "flush" on a channel
    (and
    there
    should not be..) points toward having full control of the buffer
    and
    just
    move it inside a goroutine. So a zero-buffered chan or xchan in my
    opinion
    is the most useful channel.)

    I probably understand what you want to do. Correct me if I am wrong,
    you want to implement arbitrary complex overload control on top of
    goroutines and channels.
    But, hey, Go has channels of channels, and this gives you natural way
    to say "I want a message. Now!":

    inc := make(chan *Req, 1000)
    outc := make(chan chan *Req, 1000)
    go func() {
    // dispatcher
    reqs := MakeHeapOrWhatever()
    for {
    select {
    case r := <-inc:
    reqs.Add(r)
    case c := <-outc:
    c <- reqs.Get()
    case <-time.After(...):
    reqs.Tick()
    }
    }
    }()

    for i := 0; i < 10; i++ {
    go func() {
    // consumer
    c := make(chan *Req, 1)
    for {
    outc <- c
    r := <-c
    Process(r)
    }
    }()
    }

    You also need to nil out outc in dispatcher select when reqs are
    empty, so that you don't receive from outc when you have nothing to
    send.

    Nice!

    Sending over a channel and then picking it up to use is a nice idiom.
    Occam-2 did not have it and I missed it. Occam-pi does, they call it MOBILE
    channels I believe. Go uses this to simulate the missing boolean expresion
    in a guard: in order to close out other goroutines in a specific
    session, a
    session start also sends over a reply channel or channels over which to
    communicate while that session goes on.

    I think this is what you are suggesting. Your code seems to be getting close
    to Peter Welch's model. But you have a timeout in there, which tells me
    there's something wrong with it. I am learning by reading code since I at
    short at writing any - so I'l have to be "clinical" about my analysis.
    I've included timeouts to handle the following overload control policy:
    Assume you want to send TIMEOUT response to requests that are not
    serviced within 1 second.
    If the dispatcher reacts only to input messages and requests from
    workers, and there are no such messages/requests within 1 second, then
    the dispatcher can not send TIMEOUT responses (it does not get control
    within that second). So in the Tick() method you can scan all
    outstanding requests, find very old ones, remove them from the
    container and send TIMEOUT response.

    Provided you did code a "model" of xchan, that's fine. I could do that in
    any channel based language by using what I have. But in the rationale for
    XCHAN that's not the point. The XCHAN by itself, as a first-class citizen of
    the language, joins asynch/nonblocking and synch/blocking and safe channels
    and breaks cycles to avoid deadlock in one chunk - and might save Go from
    the asynchronous hordes out there (..) As with any language feature, it's
    the basic need that would decide whether to include it. Still I don't know
    if xchan is a good idea for Go... but I think I would know how and when I
    would use it if it were in there.
    How and when would you use it?

    "joins asynch/nonblocking and synch/blocking and safe channels and
    breaks cycles to avoid deadlock in one chunk" -- this sounds very
    abstract to me. For now I do not see any real-world problems with no
    clean way to solve in Go w/o xchans.
    You may be right. But my paper does answer these questions in a
    non-abstract way.

    In the "real-world" anything blocking is 'evil'. I infer that Go's
    beautiful blocking channels (even buffered channels block when they are
    full) are considered horrifying by a large group of programmers.

    Blocking in Go is not bad. It is good, because it simplifies programs w/o
    sacrificing performance.
    That "blocking is bad" usually stems either from "old single-thread unix
    world" or from "thread-per-request" world. Their arguments do not hold for
    Go.
    http://www.teigfam.net/oyvind/home/technology/075-eventual-concurrency/#Notepad.
    Ok?

    They would shy off. This is the *main* rationale for xchan (the rest is the
    'abstract' paragraph above). When I programmed alone in occam I didn't miss
    it. I have tried to understand the mostly asynchronous world in some of
    these: http://www.teigfam.net/oyvind/home/technology/.

    Pn => S -> C ->

    I should have said this before: My wish would of course be to see any
    example code that does xchan semantics commented with "xchan idioms" from
    my paper and the figures there. There would be P(roducers) -> S(erver) ->
    C(onsumer) -> some hole. There would be local overflow handling in S and C
    is allowed to block on some hole forever (in which case all is lost). If C
    always gets rid of its data then Pn->S->C will not lose anything. The
    real-world may be in between. There is no dispatcher, as the x-ready
    feedback channel is triggered by the run-time. C is allowed to be read in a
    selective choice with other channels, and changing the input channel in C
    from reading on a chan to an xchan will not change a line in C. S is always
    ready to input data from any Pn (or else as specified). Timeout is a
    special case, not treated by xchan itself (and it should not be). The xchan
    between S and C may have zero buffering. And further: expanding xchan could
    open for feathering (but that, I admit, is harder with Go since there are
    no conditions in guards).

    Thank you!

    Øyvind

    --
    You received this message because you are subscribed to the Google Groups
    "golang-nuts" group.
    To unsubscribe from this group and stop receiving emails from it, send an
    email to [email protected] <javascript:>.
    For more options, visit https://groups.google.com/groups/opt_out.
  • Dmitry Vyukov at Jan 16, 2014 at 6:58 am
    OK
    On Thu, Jan 16, 2014 at 10:32 AM, Øyvind Teig wrote:


    On Wednesday, January 15, 2014 at 11:36:22 AM UTC+1, Dmitry Vyukov wrote:
    On Tue, Jan 14, 2014 at 9:08 PM, Øyvind Teig wrote:



    On Tuesday, January 14, 2014 at 12:35:38 PM UTC+1, Dmitry Vyukov wrote:
    On Tue, Jan 14, 2014 at 3:20 PM, Øyvind Teig <[email protected]>
    wrote:
    On Tuesday, January 14, 2014 at 8:23:36 AM UTC+1, Dmitry Vyukov wrote:
    On Tue, Jan 14, 2014 at 2:07 AM, Øyvind Teig
    <[email protected]>
    wrote:
    On Monday, January 13, 2014 at 3:24:36 PM UTC+1, Dmitry Vyukov wrote:
    I may be missing something, but it seems to me that the use
    case is
    already perfectly covered by non-blocking sends in Go:

    for msg := range inchan {
        select {
        case outchan <- msg:
        default:
            // handle overflow (decide what value(s) to discard)
        }
    }

    Full and overflow are not the same. The fact that a message is not
    taken by the receiver is not the same as an overflow.
    What is the difference? You just need to set buffer size so that
    full==overflow.

    I did try to explain exactly that. Looking at border cases is always
    a good idea:

    For a zero-buffered channel, in your case sending to a receiver that
    is not ready will cause overflow (0==0). That's simply wrong
    semantics, the sender shall block. And as we know blocking is not
    harmful or evil. The else-on-failed-send condition is there simply to
    be able to poll to see if there is a receiver, nice to have for some
    cases. And if you don't want to lose that message you'd have to do a
    re-send by timeout or busy-poll. That's not CSP.
    For an N-buffered channel you say that when the channel is full there
    is overflow (N==N). That's also wrong semantics. Overflow happens
    when you try to fill into a full buffer. (The fact that the buffered
    channel then needs to keep the old messages since there is no "flush"
    on a channel (and there should not be..) points toward having full
    control of the buffer and just move it inside a goroutine. So a
    zero-buffered chan or xchan in my opinion is the most useful channel.)

    I probably understand what you want to do. Correct me if I am wrong,
    you want to implement arbitrary complex overload control on top of
    goroutines and channels.
  • Eapache at Jan 13, 2014 at 3:03 pm
    Xchan looks like a neat concept, but I don't have the time/inclination to implement it myself at the moment.

  • Øyvind Teig at Jan 14, 2014 at 8:15 am
    On Monday, January 13, 2014 at 4:03:55 PM UTC+1, Evan Huus wrote:
    Xchan looks like a neat concept, but I don't have the time/inclination to
    implement it myself at the moment.

    Ok. And besides, you might have to get into the Go scheduler to implement a
    proper ("classic") xchan... (how's that for an inclination..). (But that
    would just screw up Go; xchan isn't there by design.)

    But then, Peter Welch's model does it a little differently
    ("pre-confirmed"), where it *might* be possible to do it without the
    scheduler's immediate help. But the latter probably makes 'feathering'
    (suppressing uninteresting messages) impossible.

        - "Classic" and "pre-confirmed": "Names of XCHAN Implementations" at
        http://www.wotug.org/paperdb/show_proc.php?f=4&num=30
        - Peter Welch's model in occam-pi:
        https://www.cs.kent.ac.uk/research/groups/plas/wiki/An_occam_Model_of_XCHANs

    Øyvind


  • Martin Schnabel at Jan 13, 2014 at 8:47 pm

    On 01/09/2014 06:59 PM, [email protected] wrote:
    Based on this idea and sample code I ended up writing an entire package
    that implements a bunch of related ideas in this area:

    https://github.com/eapache/channels
    https://godoc.org/github.com/eapache/channels

    It includes channels with "infinite" buffers, channels with
    finite-but-resizable buffers, and a bunch of other useful types and
    functions.
    just took a look at the package. in all channel implementations that use
    a buffer you slice the buffer[1:] when sending but never readjust the
    buffer. this means the buffer will grow indefinitely with every
    append/receive. this is very much broken.

  • Evan Huus at Jan 13, 2014 at 8:56 pm

    On Monday, January 13, 2014 3:47:42 PM UTC-5, mb0 wrote:
    just took a look at the package. in all channel implementations that use
    a buffer you slice the buffer[1:] when sending but never readjust the
    buffer. this means buffer will grow indefinatly with every
    append/receive. this is very much broken.
    No it's not, due to the convenient behaviour of the append() function. As
    per [1], when appending to a full slice it allocates a new slice of
    necessary size, copies the elements, and returns the new slice. The old
    slice (with all the "stale" elements) can then be garbage-collected. This
    leads to very simple code and nice amortized run-time costs. A linked-list
    behaviour would provide guaranteed constant-time cost, but would kill the
    garbage-collector.

    [1] http://blog.golang.org/slices
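    To make the amortized argument concrete, here is a minimal slice-backed
    FIFO in the idiom under discussion (a sketch of the technique, not the
    actual eapache/channels code):

```go
package main

import "fmt"

// fifo pops by advancing the slice header; once Push makes append
// reallocate, only the live window is copied and the old backing array
// (stale prefix included) becomes eligible for garbage collection.
type fifo struct{ buf []int }

func (q *fifo) Push(v int) { q.buf = append(q.buf, v) }

func (q *fifo) Pop() (int, bool) {
	if len(q.buf) == 0 {
		return 0, false
	}
	v := q.buf[0]
	q.buf = q.buf[1:]
	return v, true
}

func main() {
	var q fifo
	for i := 0; i < 5; i++ {
		q.Push(i)
	}
	for v, ok := q.Pop(); ok; v, ok = q.Pop() {
		fmt.Print(v, " ")
	}
	fmt.Println() // prints: 0 1 2 3 4
}
```

    The cost is amortized O(1) per operation: each reallocation copies only
    the elements still queued, at the price of some transient garbage.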

  • Simon place at Jan 13, 2014 at 9:50 pm
    "The old slice (with all the "stale" elements) can then be
    garbage-collected."

    AFAIK slices don't have any elements, their underlying array does, and
    append beyond capacity creates an EXTENDED copy. you could have all the
    original slices go and so their array GC'ed, but the new one contains a
    copy of all the "stale" elements, even if no slice remains that refers to
    any of the elements from prior to the append point.

    so as long as there is a slice, any of its derived slices, or any from an
    append of any of them, nothing really goes away (except some duplicates).

    but, by doing your own special append, copying only what's needed to a new
    array and making the slices on that, you could do it.

  • Evan Huus at Jan 13, 2014 at 10:06 pm

    On Monday, January 13, 2014 4:50:51 PM UTC-5, simon place wrote:
    "The old slice (with all the "stale" elements) can then be
    garbage-collected."

    AFAIK slices don't have any elements, their underlying array does
    Yes, sorry, I misspoke.

    and append beyond capacity creates an EXTENDED copy
    No, it only creates a copy of those elements still referenced by the slice,
    so the copy does *not* contain the stale elements. This is implied by
    http://blog.golang.org/slices and can be easily tested, eg
    http://play.golang.org/p/QZN8ZPV9-V only ever uses a few MB of memory even
    when 10 million integers (~40MB) have been transferred.

    (unfortunately the playground won't allow 3rd-party imports so you can't
    run that online, but you can copy-paste it locally to verify)

    you could have all the original slices go and so their array GC'ed, but the
    new one contains a copy of all the "stale" elements, even if no slice
    remains that refers to any of elements from prior to the append point.

    so as long as there is a slice, any of its derived slices, or any from an
    append of any of them, nothing really goes away.(except some duplicates)

    but, by doing your own special append, copying only what's needed to a new
    array and making the slices on that, you could do it.
  • Simon place at Jan 14, 2014 at 12:14 am
    i see now, the source slice is copied, not the array, so the new array
    starts from the index of the start of the source slice, and you can't even
    get to the array from the slice.


    func Append(slice []int, elements ...int) []int {
        n := len(slice)
        total := len(slice) + len(elements)
        if total > cap(slice) {
            // Reallocate. Grow to 1.5 times the new size, so we can still grow.
            newSize := total*3/2 + 1
            newSlice := make([]int, total, newSize)
            copy(newSlice, slice)
            slice = newSlice
        }
        slice = slice[:total]
        copy(slice[n:], elements)
        return slice
    }



  • Kyle Lemons at Jan 13, 2014 at 8:58 pm

    On Mon, Jan 13, 2014 at 12:47 PM, Martin Schnabel wrote:
    just took a look at the package. in all channel implementations that use a
    buffer you slice the buffer[1:] when sending but never readjust the buffer.
    this means buffer will grow indefinatly with every append/receive. this is
    very much broken.

    It won't grow indefinitely. When it needs to reallocate, it will get a new
    buffer with only the elements in the slice (plus the additional capacity).

    http://play.golang.org/p/yKiLdet-0m

  • Martin Schnabel at Jan 13, 2014 at 9:01 pm

    On 01/13/2014 09:58 PM, Kyle Lemons wrote:
    On Mon, Jan 13, 2014 at 12:47 PM, Martin Schnabel wrote:

    just took a look at the package. in all channel implementations that
    use a buffer you slice the buffer[1:] when sending but never
    readjust the buffer. this means buffer will grow indefinatly with
    every append/receive. this is very much broken.


    It won't grow indefinitely. When it needs to reallocate, it will get a
    new buffer with only the elements in the slice (plus the additional
    capacity).

    http://play.golang.org/p/yKiLdet-0m
    ok thanks, i stand corrected. for some reason that never occurred to me.

  • Aroman at Jan 13, 2014 at 9:32 pm

    On Monday, January 13, 2014 12:58:28 PM UTC-8, Kyle Lemons wrote:
    On Mon, Jan 13, 2014 at 12:47 PM, Martin Schnabel <[email protected]<javascript:>
    wrote:
    On 01/09/2014 06:59 PM, [email protected] <javascript:> wrote:

    Based on this idea and sample code I ended up writing an entire package
    that implements a bunch of related ideas in this area:

    https://github.com/eapache/channels
    https://godoc.org/github.com/eapache/channels

    It includes channels with "infinite" buffers channels with
    finite-but-resizable buffers and a bunch of other useful types and
    functions.
    just took a look at the package. in all channel implementations that use
    a buffer you slice the buffer[1:] when sending but never readjust the
    buffer. this means buffer will grow indefinatly with every append/receive.
    this is very much broken.

    It won't grow indefinitely. When it needs to reallocate, it will get a
    new buffer with only the elements in the slice (plus the additional
    capacity).

    http://play.golang.org/p/yKiLdet-0m
    That's not the same, that's truncating the slice:
        a = a[:1]

    But if you naively use the slice as a FIFO:
        el, a = a[0], a[1:]

    ...then you have the memory problems, since you are continuously shifting
    your usage along a slice: http://play.golang.org/p/mJJZO8iiEA
    The runtime is forced to continuously reallocate and GC old slices.

    mb0 is correct.

    - Augusto

  • Evan Huus at Jan 13, 2014 at 9:37 pm

    On Monday, January 13, 2014 4:32:27 PM UTC-5, [email protected] wrote:
    On Monday, January 13, 2014 12:58:28 PM UTC-8, Kyle Lemons wrote:
    On Mon, Jan 13, 2014 at 12:47 PM, Martin Schnabel wrote:
    just took a look at the package. in all channel implementations that use
    a buffer you slice the buffer[1:] when sending but never readjust the
    buffer. this means buffer will grow indefinatly with every append/receive.
    this is very much broken.

    It won't grow indefinitely. When it needs to reallocate, it will get a
    new buffer with only the elements in the slice (plus the additional
    capacity).

    http://play.golang.org/p/yKiLdet-0m
    That's not the same, that's truncating the slice:
    a = a[:1]

    But if you naively use the slice as a FIFO:
    el, a = a[0], a[1:]

    ...then you have the memory problems, since you are continuously shifting
    your usage along a slice: http://play.golang.org/p/mJJZO8iiEA
    The runtime is forced to continuously reallocate and GC old slices.
    This is all true, but the only alternative is to implement a
    linked-list-type structure whose many small allocations are actually more
    expensive for the GC to deal with.

    mb0 is correct.
    Not really. They were concerned that the old slices would never be GCed and
    so memory would leak, which is not the case.

    - Augusto

  • Aroman at Jan 13, 2014 at 10:05 pm

    On Monday, January 13, 2014 1:37:39 PM UTC-8, Evan Huus wrote:
    On Monday, January 13, 2014 4:32:27 PM UTC-5, [email protected] wrote:
    On Monday, January 13, 2014 12:58:28 PM UTC-8, Kyle Lemons wrote:
    On Mon, Jan 13, 2014 at 12:47 PM, Martin Schnabel wrote:
    just took a look at the package. in all channel implementations that
    use a buffer you slice the buffer[1:] when sending but never readjust the
    buffer. this means buffer will grow indefinatly with every append/receive.
    this is very much broken.

    It won't grow indefinitely. When it needs to reallocate, it will get a
    new buffer with only the elements in the slice (plus the additional
    capacity).

    http://play.golang.org/p/yKiLdet-0m
    That's not the same, that's truncating the slice:
    a = a[:1]

    But if you naively use the slice as a FIFO:
    el, a = a[0], a[1:]

    ...then you have the memory problems, since you are continuously shifting
    your usage along a slice: http://play.golang.org/p/mJJZO8iiEA
    The runtime is forced to continuously reallocate and GC old slices.
    This is all true, but the only alternative is to implement a
    linked-list-type structure whose many small allocations is actually more
    expensive for the GC to deal with.
    I see. Another alternative is to maintain two slices: the active slice
    that you are slicing as you remove elements, and an overflow slice that
    can grow as necessary. When the active slice is empty, swap the two. By
    keeping track of the original active slice, you can re-use memory
    efficiently but still allow infinite FIFO growth without producing garbage:

    http://play.golang.org/p/wr8pHkg2r1
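    A minimal version of that scheme might look like this (my own sketch,
    not the playground code; a read index is used so the drained active
    slice keeps its full capacity when it is recycled as the next overflow):

```go
package main

import "fmt"

// twoSliceQueue pops from active (via a read index) and pushes to
// overflow; when active drains, the two are swapped and the drained
// slice is reset to length zero so append can reuse its capacity.
type twoSliceQueue struct {
	active   []int
	pos      int // read position within active
	overflow []int
}

func (q *twoSliceQueue) Push(v int) { q.overflow = append(q.overflow, v) }

func (q *twoSliceQueue) Pop() (int, bool) {
	if q.pos == len(q.active) {
		if len(q.overflow) == 0 {
			return 0, false
		}
		// Swap: the drained slice becomes the new overflow buffer.
		q.active, q.overflow = q.overflow, q.active[:0]
		q.pos = 0
	}
	v := q.active[q.pos]
	q.pos++
	return v, true
}

func main() {
	var q twoSliceQueue
	for i := 1; i <= 5; i++ {
		q.Push(i)
	}
	for v, ok := q.Pop(); ok; v, ok = q.Pop() {
		fmt.Print(v, " ")
	}
	fmt.Println() // prints: 1 2 3 4 5
}
```

    FIFO order holds because the active slice is fully drained before the
    swap, so nothing pushed later can overtake an earlier element.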

    mb0 is correct.
    Not really. They were concerned that the old slices would never be GCed
    and so memory would leak, which is not the case.
    Thanks, I misunderstood.

    - Augusto

  • Evan Huus at Jan 13, 2014 at 10:10 pm

    On Monday, January 13, 2014 5:04:59 PM UTC-5, [email protected] wrote:
    On Monday, January 13, 2014 1:37:39 PM UTC-8, Evan Huus wrote:
    On Monday, January 13, 2014 4:32:27 PM UTC-5, [email protected] wrote:
    On Monday, January 13, 2014 12:58:28 PM UTC-8, Kyle Lemons wrote:
    On Mon, Jan 13, 2014 at 12:47 PM, Martin Schnabel wrote:
    just took a look at the package. in all channel implementations that
    use a buffer you slice the buffer[1:] when sending but never readjust the
    buffer. this means buffer will grow indefinatly with every append/receive.
    this is very much broken.

    It won't grow indefinitely. When it needs to reallocate, it will get a
    new buffer with only the elements in the slice (plus the additional
    capacity).

    http://play.golang.org/p/yKiLdet-0m
    That's not the same, that's truncating the slice:
    a = a[:1]

    But if you naively use the slice as a FIFO:
    el, a = a[0], a[1:]

    ...then you have the memory problems, since you are continuously
    shifting your usage along a slice: http://play.golang.org/p/mJJZO8iiEA
    The runtime is forced to continuously reallocate and GC old slices.
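    [A small sketch of the churn described above: even though the queue never holds more than one element, `append` keeps having to allocate fresh backing arrays once the slice's start has shifted past the original capacity.]

    ```go
    package main

    import "fmt"

    func main() {
    	// Naive slice-as-FIFO: push with append, pop with a = a[1:].
    	// Each pop moves the slice's start forward through the backing
    	// array, so the remaining capacity dwindles and append must keep
    	// allocating new arrays, leaving the old ones for the GC.
    	a := make([]int, 0, 4)
    	reallocs := 0
    	for i := 0; i < 100; i++ {
    		if cap(a) == len(a) {
    			reallocs++ // the append below is forced to allocate
    		}
    		a = append(a, i) // push
    		a = a[1:]        // pop
    	}
    	// The queue never exceeds one element, yet allocation happens
    	// over and over once the initial capacity is consumed.
    	fmt.Println(reallocs > 20)
    }
    ```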
    This is all true, but the only alternative is to implement a
    linked-list-type structure whose many small allocations is actually more
    expensive for the GC to deal with.
    I see. Another alternative is to maintain two slices. The active slice
    that you are slicing as you are removing elements, and an overflow slice
    that can grow as necessary. When the active slice is empty, swap the two.
    By keeping track of the original active slice, you can re-use memory
    efficiently but still allow infinite FIFO growing without producing garbage:

    http://play.golang.org/p/wr8pHkg2r1
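    [A sketch of the two-slice idea as described above (names are illustrative, not taken from the playground snippet): pop from an "active" slice, push onto an "overflow" slice, and when the active slice runs dry, swap them, recycling the drained backing array instead of producing garbage.]

    ```go
    package main

    import "fmt"

    // twoSliceQueue pops from active and pushes onto overflow; when
    // active empties, overflow becomes active and the old active
    // backing array (activeBase) is reused as the new overflow.
    type twoSliceQueue struct {
    	active, overflow []int
    	activeBase       []int // full-capacity base of active, for reuse
    }

    func (q *twoSliceQueue) push(v int) {
    	q.overflow = append(q.overflow, v)
    }

    func (q *twoSliceQueue) pop() (int, bool) {
    	if len(q.active) == 0 {
    		if len(q.overflow) == 0 {
    			return 0, false
    		}
    		// Swap: overflow becomes active; the drained active array
    		// is recycled (sliced to zero length) as the new overflow.
    		q.active, q.overflow, q.activeBase =
    			q.overflow, q.activeBase[:0], q.overflow
    	}
    	v := q.active[0]
    	q.active = q.active[1:]
    	return v, true
    }

    func main() {
    	var q twoSliceQueue
    	for i := 0; i < 6; i++ {
    		q.push(i)
    	}
    	for {
    		v, ok := q.pop()
    		if !ok {
    			break
    		}
    		fmt.Println(v)
    	}
    }
    ```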
Huh, neat idea. The only potential drawback is that they don't shrink back
    down when the number of buffered items shrinks. I'd be interested in seeing
    benchmarks of the two approaches, though.


  • Kyle Lemons at Jan 13, 2014 at 10:50 pm

    On Mon, Jan 13, 2014 at 2:10 PM, Evan Huus wrote:
    Huh, neat idea. The only potential drawback is that they don't shrink back
    down when the number of buffered items shrinks. I'd be interested in seeing
    benchmarks of two approaches though.
    Apologies for the bug in my previous example.

    The capacity of reallocated buffers definitely does decrease. It's based
    on the size of the slice, not the number of elements that were originally
    in the underlying array. Because of this, slices actually do work quite
    well as a queue, automatically handling allocation for bursts of data and
    shrinking the footprint back down when the spike is gone.

    http://play.golang.org/p/96AFJW_i9T

    Notice that the capacities and the number of extra elements available for
    future data increase during the burst and come back down after.



Discussion Overview
group: golang-nuts
categories: go
posted: Jan 10, '14 at 4:34a
active: Jan 16, '14 at 6:58a
posts: 31
users: 7
website: golang.org
