Hi,

I am trying to implement a cache with key expiration. As far as I know,
there are several existing implementations, and most of them use an
RWMutex to protect the storage. I tried implementing it in a channel-mux
style instead: a single mux goroutine accepts cache operations such as
set, del, get and expire over a channel, since Effective Go says "Do not
communicate by sharing memory; instead, share memory by communicating."
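
Roughly, the channel-mux version looks like this (a simplified sketch; the
request type and field names here are only illustrative, not the exact code
in my repository):

package cache

// request carries one cache operation to the mux goroutine.
type request struct {
    op    string           // "get", "set" or "del"; expire is handled the same way
    key   string
    value interface{}
    reply chan interface{} // carries the result of a "get"
}

// ChanCache serializes all access through a single mux goroutine.
type ChanCache struct {
    reqs chan request
}

func NewChanCache() *ChanCache {
    c := &ChanCache{reqs: make(chan request)}
    go c.mux()
    return c
}

// mux is the only goroutine that touches the map, so no lock is needed.
func (c *ChanCache) mux() {
    store := make(map[string]interface{})
    for req := range c.reqs {
        switch req.op {
        case "set":
            store[req.key] = req.value
        case "del":
            delete(store, req.key)
        case "get":
            req.reply <- store[req.key]
        }
    }
}

func (c *ChanCache) Set(key string, value interface{}) {
    c.reqs <- request{op: "set", key: key, value: value}
}

func (c *ChanCache) Get(key string) interface{} {
    reply := make(chan interface{}, 1)
    c.reqs <- request{op: "get", key: key, reply: reply}
    return <-reply
}

Every operation, including every Get, has to travel through that single mux
goroutine.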

I implemented an RWMutex version for comparison. In my experiments,
setting 500000 random keys and then getting 250000 keys, the RWMutex
version was much faster than the channel version (857 milliseconds vs
1613 milliseconds), and it also used less memory. I suspect a read lock
is much cheaper than queuing each Get operation through the mux
goroutine.
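
The RWMutex version I compared against is roughly the following (again a
simplified sketch, not the exact code in the repository):

package cache

import "sync"

// MutexCache protects the map with an RWMutex: readers share an RLock,
// writers take the exclusive lock.
type MutexCache struct {
    mu    sync.RWMutex
    store map[string]interface{}
}

func NewMutexCache() *MutexCache {
    return &MutexCache{store: make(map[string]interface{})}
}

func (c *MutexCache) Set(key string, value interface{}) {
    c.mu.Lock()
    c.store[key] = value
    c.mu.Unlock()
}

func (c *MutexCache) Get(key string) (interface{}, bool) {
    c.mu.RLock()
    v, ok := c.store[key]
    c.mu.RUnlock()
    return v, ok
}

func (c *MutexCache) Del(key string) {
    c.mu.Lock()
    delete(c.store, key)
    c.mu.Unlock()
}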

Is there any reason to still prefer the channel-mux approach? Just
curious about this. :)

My experiment code can be found at https://github.com/mijia/cache
Thank you very much.

Best and Regards,
Jia Mi

  • Dmitry Vyukov at Dec 19, 2014 at 1:17 pm

    On Fri, Dec 19, 2014 at 4:03 PM, Jia Mi wrote:
    > Is there any reason to still prefer the channel-mux approach? Just
    > curious about this. :)

    No, we don't like chan mux. It is not communication with a
    concurrent activity; it is access to a passive resource.

    Also if you benchmark it on a parallel machine with plenty of cores,
    RWMutex will be waaaaaay faster.
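
    For example, a parallel benchmark along these lines will show it once the
    Gets come from many goroutines at once (a sketch reusing the hypothetical
    MutexCache/ChanCache from the first message):

    package cache

    import (
        "strconv"
        "testing"
    )

    // BenchmarkParallelGet drives Gets from GOMAXPROCS goroutines at once, so
    // contention on the lock (or on the single mux goroutine) actually shows up.
    func BenchmarkParallelGet(b *testing.B) {
        c := NewMutexCache() // swap in NewChanCache() to compare
        for i := 0; i < 1000; i++ {
            c.Set(strconv.Itoa(i), i)
        }
        b.ResetTimer()
        b.RunParallel(func(pb *testing.PB) {
            i := 0
            for pb.Next() {
                c.Get(strconv.Itoa(i % 1000))
                i++
            }
        })
    }

    Running it with go test -bench . -cpu 1,4,16 shows how each implementation
    scales with cores: the mux version funnels every Get through one goroutine,
    while RLock lets readers proceed in parallel.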
  • Dmitry Vyukov at Dec 19, 2014 at 1:18 pm
    Also, you have a bad data race there:

    WARNING: DATA RACE
    Write by goroutine 8:
       github.com/mijia/cache.(*Cache).initCleanup()
           github.com/mijia/cache/cache.go:105 +0x192

    Previous read by goroutine 5:
       github.com/mijia/cache.(*Cache).mux()
           github.com/mijia/cache/cache.go:148 +0x966

    Goroutine 8 (running) created at:
       github.com/mijia/cache.(*Cache).mux()
           github.com/mijia/cache/cache.go:111 +0x51

    Goroutine 5 (running) created at:
       github.com/mijia/cache.NewCache()
           github.com/mijia/cache/cache.go:174 +0x207
       github.com/mijia/cache.init·1()
           github.com/mijia/cache/cache_test.go:116 +0x47
       github.com/mijia/cache.init()
           github.com/mijia/cache/cache_test.go:119 +0xaa
       main.init()
           github.com/mijia/cache/_test/_testmain.go:60 +0x8c
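
    (The report above is the Go race detector's output; it can be reproduced
    on the test suite with:

        go test -race github.com/mijia/cache

    Per the trace, initCleanup writes memory that the mux goroutine reads
    without synchronization; routing that update through the mux channel, or
    guarding it with a mutex, would be the usual fix.)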


  • Jia Mi at Dec 19, 2014 at 4:18 pm
    Hi Dmitry,

    Thank you very much for the reply. We definitely won't use the channel
    mux in production in this case, since the results show an overwhelming
    advantage for RWMutex. I had previously done some resource-pool
    management with a channel mux, which is why I was wondering whether a
    channel mux would also be suitable for cache management.

    I think I get your point now: a resource pool is a kind of passive
    resource, but a cache is not. Hope I got it right. :)

    Thank you again, this really helped me a lot in making it clear.

    Best and regards,
    Jia

    On Fri, Dec 19, 2014 at 21:16, Dmitry Vyukov wrote:

    > No, we don't like chan mux. It is not communication with a
    > concurrent activity; it is access to a passive resource.
