Hi Sugu,

I've benchmarked both vitess' library and ours. However, keep in mind that
while vitess' library just opens a connection, our library also does
connection pooling. This means that even when using a single thread,
there's some overhead from locking and map lookups. To get an apples-to-apples
comparison, someone would need to benchmark your "upper layer", where
you're also caching and reusing connections. I also omitted the
concurrent benchmarks, since they are not applicable to your library. With
that said, the results are:

benchmark              old ns/op    new ns/op    delta
BenchmarkSetGet        144417       138200       -4.30%
BenchmarkSetGetLarge   165900       146594       -11.64%

benchmark              old MB/s     new MB/s     speedup
BenchmarkSetGet        0.04         0.04         1.00x
BenchmarkSetGetLarge   7.62         8.62         1.13x

benchmark              old allocs   new allocs   delta
BenchmarkSetGet        10           6            -40.00%
BenchmarkSetGetLarge   10           6            -40.00%

benchmark              old bytes    new bytes    delta
BenchmarkSetGet        211          170          -19.43%
BenchmarkSetGetLarge   1594         1184         -25.72%


As you can see, our library is a bit more efficient than yours, but not by
much.

Regards,
Alberto

P.S.: Benchmark code attached
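Each iteration is one Set followed by one Get; roughly, for the gomemcache
side it looks like this (a simplified sketch, not the exact attached code;
the value sizes are placeholders and a memcached on localhost is assumed):

// bench_test.go — simplified sketch of the Set/Get benchmark.
package bench

import (
	"bytes"
	"testing"

	"github.com/bradfitz/gomemcache/memcache"
)

// benchmarkSetGet times one Set followed by one Get per iteration.
func benchmarkSetGet(b *testing.B, valueSize int) {
	c := memcache.New("127.0.0.1:11211")
	item := &memcache.Item{
		Key:   "bench",
		Value: bytes.Repeat([]byte("x"), valueSize),
	}
	b.SetBytes(int64(valueSize)) // lets "go test -bench" report MB/s
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := c.Set(item); err != nil {
			b.Fatal(err)
		}
		if _, err := c.Get(item.Key); err != nil {
			b.Fatal(err)
		}
	}
}

// Value sizes here are illustrative guesses, not the attachment's numbers.
func BenchmarkSetGet(b *testing.B)      { benchmarkSetGet(b, 6) }
func BenchmarkSetGetLarge(b *testing.B) { benchmarkSetGet(b, 1250) }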
On Thursday, October 31, 2013 1:14:51 AM UTC+1, Sugu Sougoumarane wrote:

I've been meaning to compare vitess's implementation
(https://github.com/youtube/vitess/tree/master/go/memcache) against Brad's
to see if we could switch over to using his instead. I just haven't gotten
around to it.
What we have is kind of barebones, with no sharding, etc.
Can you check if you could change your benchmark to try ours?
On Wednesday, October 30, 2013 9:52:47 AM UTC-7, bradfitz wrote:

Hopefully I didn't ignore any pull requests, but it's likely I did by
accident. I get so much email from GitHub (some of which I haven't figured
out how to disable) that I now filter all GitHub mail to a label that I
then forget to look at regularly. Which means I'm slow to see good pull
requests too.



On Wed, Oct 30, 2013 at 9:36 AM, Alberto García Hierro <alb...@garciahierro.com> wrote:
Thanks! I'd like to clarify that we have no intention of submitting a PR to
the original project, because Go's team favors code simplicity over
performance most of the time, and I'm pretty sure our patches would be
rejected. We will, however, maintain our fork for the foreseeable future,
and we're also open to accepting PRs from other collaborators.

Regards,
Alberto

On Wednesday, October 30, 2013 5:30:05 PM UTC+1, Camilo Aguilar wrote:

Nice work Alberto. It's a shame that Brad neither maintains nor even
answers PRs sent to his original project by collaborators.


On Wed, Oct 30, 2013 at 12:23 PM, Alberto García Hierro <alb...@garciahierro.com> wrote:
Hi,

After porting a site which heavily uses memcache from Python to Go, I
noticed a lot of logged errors due to timeouts when communicating with
memcache. After a bit of profiling, I found that the library we were using
(https://github.com/bradfitz/gomemcache) didn't have very good performance,
so I decided to fork and improve it, in order to get better cache response
times and lower memory usage.

I've just uploaded the result at https://github.com/rainycape/gomemcache.
We're not using it in production yet, but we'll start doing so really soon
(probably tomorrow). Since everyone loves numbers, I'm attaching a
performance comparison between the old implementation and the new one to
this email.
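
For anyone who wants to try it, usage follows the upstream API; a minimal
sketch against a local memcached (the Timeout field is the upstream
client's socket-deadline knob, relevant to the timeout errors I mentioned):

package main

import (
	"fmt"
	"log"
	"time"

	// The fork lives at github.com/rainycape/gomemcache; swapping the
	// import path is the only change needed, assuming the fork keeps
	// the upstream API.
	"github.com/bradfitz/gomemcache/memcache"
)

func main() {
	c := memcache.New("127.0.0.1:11211")
	c.Timeout = 100 * time.Millisecond // socket read/write deadline

	if err := c.Set(&memcache.Item{Key: "greeting", Value: []byte("hello")}); err != nil {
		log.Fatal(err)
	}
	it, err := c.Get("greeting")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %q\n", it.Key, it.Value)
}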


Regards,
Alberto

benchmark                              old ns/op    new ns/op    delta
BenchmarkSetGet                        214443       175154       -18.32%
BenchmarkSetGetLarge                   262164       196155       -25.18%
BenchmarkConcurrentSetGetSmall10_100   82561221     62172865     -24.69%
BenchmarkConcurrentSetGetLarge10_100   96067285     74113235     -22.85%
BenchmarkConcurrentSetGetSmall20_100   152834658    116654143    -23.67%
BenchmarkConcurrentSetGetLarge20_100   202574186    144950678    -28.45%

benchmark                              old MB/s     new MB/s     speedup
BenchmarkSetGet                        0.03         0.03         1.00x
BenchmarkSetGetLarge                   4.82         6.44         1.34x
BenchmarkConcurrentSetGetSmall10_100   0.07         0.10         1.43x
BenchmarkConcurrentSetGetLarge10_100   13.16        17.05        1.30x
BenchmarkConcurrentSetGetSmall20_100   0.08         0.10         1.25x
BenchmarkConcurrentSetGetLarge20_100   12.48        17.44        1.40x

benchmark                              old allocs   new allocs   delta
BenchmarkSetGet                        18           6            -66.67%
BenchmarkSetGetLarge                   19           6            -68.42%
BenchmarkConcurrentSetGetSmall10_100   58469        6268         -89.28%
BenchmarkConcurrentSetGetLarge10_100   59848        6277         -89.51%
BenchmarkConcurrentSetGetSmall20_100   117177       12663        -89.19%
BenchmarkConcurrentSetGetLarge20_100   120173       12686        -89.44%

benchmark                              old bytes    new bytes    delta
BenchmarkSetGet                        2479         170          -93.14%
BenchmarkSetGetLarge                   7537         1184         -84.29%
BenchmarkConcurrentSetGetSmall10_100   3101520      203867       -93.43%
BenchmarkConcurrentSetGetLarge10_100   8330341      1211143      -85.46%
BenchmarkConcurrentSetGetSmall20_100   6318072      421952       -93.32%
BenchmarkConcurrentSetGetLarge20_100   16884200     2437906      -85.56%


--
Camilo Aguilar
Software Engineer
http://github.com/c4milo

