Hi all,

I'm trying to write a program to manage a shared resource for which I need
blocking function call semantics on the logic handling the resource. E.g.
if I want to implement a hit counter for a website, I would need a blocking
`increment_counter()` method which would update the counter and return me
the new value. I was using a mutex for my application, but that wasn't
working out well.

The other approach I tried was to have a goroutine manage the counter as
its internal state. To provide blocking function call semantics, I also
pass it a "response channel" along with other arguments. The manager
goroutine must manipulate the counter state and return the updated value on
that "response channel". Here's an example of how I'm using the "response
channel" for this: http://play.golang.org/p/nqfAeVzeOj

Is this a recommended pattern for managing shared state with Go? Should I
be doing a higher-level refactoring to get around the requirement of
blocking function call semantics instead?

Thanks.

PS: Please excuse me if this has been discussed before. I just couldn't
figure out what the right search terms should be.


  • Rémy Oudompheng at Sep 11, 2012 at 6:27 am

    On 2012/9/11 Tahir Hashmi wrote:
    Hi all,

    I'm trying to write a program to manage a shared resource for which I need
    blocking function call semantics on the logic handling the resource. E.g. if
    I want to implement a hit counter for a website, I would need a blocking
    `increment_counter()` method which would update the counter and return me
    the new value. I was using a mutex for my application, but that wasn't
    working out well.
    This is surprising. Why wasn't it working ?

    Rémy.
  • Tahir Hashmi at Sep 11, 2012 at 7:06 am

    On Tuesday, September 11, 2012 11:57:43 AM UTC+5:30, Rémy Oudompheng wrote:
    This is surprising. Why wasn't it working ?

    Rémy.
    One of the reasons this was not working was that I was writing the
    functions that need to be synchronised as:

    func foo() {
        mutex.Lock()
        defer mutex.Unlock()

        // do whatever else
    }

    The coarse-grained locking meant that I had some conditions under which
    foo() would call bar(), which too needed to be synchronised on the same
    mutex, leading to a deadlock. I did move the lock/unlock bits to only wrap
    the critical sections, but the application was still freezing under high
    load. I later decided to extract all the shared state into its own handlers
    and use channels to communicate with the handlers so that I shared state by
    communicating, rather than communicating through shared state.
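
    In sketch form, the deadlock looked roughly like this (foo and bar are
    placeholder names; sync.Mutex is not reentrant, so the inner Lock()
    never returns):

    package main

    import "sync"

    var mutex sync.Mutex

    func bar() {
        mutex.Lock() // blocks forever: foo() already holds the lock
        defer mutex.Unlock()
        // touch the shared state
    }

    func foo() {
        mutex.Lock()
        defer mutex.Unlock()
        bar() // deadlock: sync.Mutex is not reentrant
    }

    func main() {
        foo() // the runtime reports "all goroutines are asleep - deadlock!"
    }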

    --
    Tahir Hashmi
  • Maarten Koopmans at Sep 11, 2012 at 9:35 am
    Tahir,

    I'm new to Go but have done a lot of "messaging" (e.g. actors in Scala).

    I *think* you need to change your thinking on this: forget about blocking
    function call semantics. What you want (I think) is to protect one or more
    resources, i.e. give only "atomic" access?

    If that's the case then your solution would be:

    1) a unique goroutine that reads from a channel and then performs logic
    atomically (update the webcounter)
    2) concurrent clients that write to the channel (trigger an update)

    If a client needs a return value, it can pass a channel in the request
    which is unique to that particular client. This will have the effect of
    blocking the client goroutine as well (while it waits for the result to
    come back).
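
    As a minimal sketch of that shape (all names are illustrative), the
    request carries its own reply channel, and that reply channel is what
    makes the call block on the client side:

    package main

    import "fmt"

    // update is one request: the arguments plus a reply channel that is
    // unique to the client that sent it.
    type update struct {
        delta int
        reply chan int
    }

    func main() {
        updates := make(chan update)

        // 1) the unique goroutine that owns the counter and applies each
        //    update atomically with respect to the others
        go func() {
            total := 0
            for u := range updates {
                total += u.delta
                u.reply <- total
            }
        }()

        // 2) concurrent clients write to the channel; each one blocks on
        //    its own reply channel until the manager has answered
        done := make(chan bool)
        for i := 0; i < 5; i++ {
            go func() {
                reply := make(chan int)
                updates <- update{delta: 1, reply: reply}
                fmt.Println("new total:", <-reply)
                done <- true
            }()
        }
        for i := 0; i < 5; i++ {
            <-done
        }
    }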

    What helps me with Go is to think of goroutines as logic on top of queues
    (channels).

    --Maarten

  • Tahir Hashmi at Sep 11, 2012 at 12:36 pm

    On Tuesday, September 11, 2012 3:05:52 PM UTC+5:30, Maarten Koopmans wrote:
    I *think* you need to change your thinking on this: forget about blocking
    function call semantics. What you want (I think) is to protect one or more
    resources, i.e. give only "atomic" access?
    Yes, that's what I need.

    If that's the case then your solution would be:

    1) a unique goroutine that reads from a channel and then performs logic
    atomically (update the webcounter)
    2) concurrent clients that write to the channel (trigger an update)

    If a client needs a return value, it can pass a channel in the request
    which is unique to that particular client. This will have the effect of
    blocking the client goroutine as well (while it waits for the result to
    come back).
    That's exactly how I've now solved the problem for the case where the
    client needs the return value.

    --
    Tahir Hashmi
  • Jan Mercl at Sep 11, 2012 at 10:01 am

    On Tue, Sep 11, 2012 at 7:40 AM, Tahir Hashmi wrote:
    The other approach I tried was to have a goroutine manage the counter as
    its internal state. To provide blocking function call semantics, I also pass
    it a "response channel" along with other arguments. The manager goroutine
    must manipulate the counter state and return the updated value on that
    "response channel". Here's an example of how I'm using the "response
    channel" for this: http://play.golang.org/p/nqfAeVzeOj

    Is this a recommended pattern for managing shared state with Go? Should I be
    doing a higher level refactoring to get around the requirement of blocking
    function call semantics instead?
    I would do something like: http://play.golang.org/p/j8B_rKbxcW

    -j
  • Paul at Sep 11, 2012 at 11:40 am
    *"Is this a recommended pattern for managing shared state with Go?"*

    From effective Go: http://golang.org/doc/effective_go.html#sharing

    *Concurrent programming in many environments is made difficult by the
    subtleties required to implement correct access to shared variables. Go
    encourages a different approach in which shared values are passed around on
    channels and, in fact, never actively shared by separate threads of
    execution. Only one goroutine has access to the value at any given time.
    Data races cannot occur, by design. To encourage this way of thinking we
    have reduced it to a slogan:
    *

    *Do not communicate by sharing memory; instead, share memory by
    communicating*.


    How to reduce bottlenecks on specific resources is maybe yet another topic.
    You might create more than one counter, for example (sharding), and
    aggregate the totals later on. I also see the need for synchronization
    specifically only for the write operations on shared resources; reading
    should be possible at any time without synchronization issues. I am saying
    this generally, without any real knowledge of your application design.
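
    As a rough illustration of the sharding idea (just a sketch, not tied to
    your design): each writer updates one shard with an atomic add, and the
    total is aggregated when it is read.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    const numShards = 8

    // shardedCounter spreads writes over several independent counters to
    // reduce contention on any single one.
    type shardedCounter struct {
        shards [numShards]int64
    }

    func (c *shardedCounter) inc(shard int) {
        atomic.AddInt64(&c.shards[shard%numShards], 1)
    }

    func (c *shardedCounter) total() int64 {
        var sum int64
        for i := range c.shards {
            sum += atomic.LoadInt64(&c.shards[i])
        }
        return sum
    }

    func main() {
        c := new(shardedCounter)
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                c.inc(i)
            }(i)
        }
        wg.Wait()
        fmt.Println(c.total()) // 100
    }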
