Anyone know of a golang implementation of brotli?

https://github.com/google/brotli/

--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

  • Klaus Post at Oct 10, 2015 at 9:23 am
    Hi!
    On Friday, 9 October 2015 16:38:27 UTC+2, Marco Peereboom wrote:

    Anyone know of a golang implementation of brotli?

    Considering it has just been released, I wouldn't expect so. Maybe someone
    has wrapped the C-code.

    I might do a port at some point, but I will wait a little for the C code to
    stabilize.

    It seems like an ok alternative to gzip for small sizes, but it is not like
    it blows it out of the water. If you are able to pre-compress the content
    it seems quite good, since the best compression modes are too slow to be
    considered for real-time compression at the moment.

    It is a pity they already defined the dictionary. Looking at it, it seems
    like it was just slapped out there as "good enough". If someone spent a
    month collecting a lot of data (i.e. monitoring net traffic) and analyzed it,
    I think compression for HTTP could have been a few percent better. The
    dictionary looks like a lot of concatenated strings with a little HTML and JS
    in there. If it was machine generated, most words would probably be separated
    by spaces, giving longer matches. Seems like a missed opportunity.

    I dumped a text version of the dictionary - 80 character wrapped
    here: https://gist.github.com/klauspost/2900d5ba6f9b65d69c8e



    /Klaus

  • Axel Wagner at Oct 10, 2015 at 10:10 am

    Klaus Post writes:
    If someone spent a
    month collecting a lot of data (ie monitor net traffic), and analyzed it, I
    think that compression for HTTP could have been a few percent better.
    Isn't this by Google? Don't you think they did that? And aren't they able to
    collect a far more representative sample and far more data than most
    people or organisations?

  • Klaus Post at Oct 10, 2015 at 11:41 am

    On Saturday, 10 October 2015 12:10:44 UTC+2, Axel Wagner wrote:
    Isn't this by Google? Don't you think they did that? And are able to
    collect a far more representative sample and far more data than most
    people or organisations?
    Maybe they have, but a lack of openness about the process always invites
    questions.

    /Klaus

  • Youtube at Oct 11, 2015 at 8:14 am

    On Saturday, October 10, 2015 at 5:41:14 AM UTC-6, Klaus Post wrote:
    On Saturday, 10 October 2015 12:10:44 UTC+2, Axel Wagner wrote:

    Isn't this by Google? Don't you think they did that? And are able to
    collect a far more representative sample and far more data than most
    people or organisations?
    Maybe they have, but not having openness about the process always opens
    for questions.

    /Klaus
    Well, if your competitors have all the source code to every one of your
    secrets, you can't exactly run a corporation that makes a profit. If any
    search engine out there had access to all of Google's source code, well,
    then... Google pretty much wouldn't have any way of making money. Open
    software is great, except in the real world there is money involved, and
    keeping some secrets keeps you ahead of competitors. In fact I'm surprised
    at how much code Google has released; it's like a FOR PROFIT corporation
    that is also a charity, but no one ever gives Google credit for being a
    charity, since all the "big brother is watching you" slurs make it sound
    like an evil corp, which overall it is not.

    Regards,
    Z505
    GNG is Not GNU
    Critic of free software utopian ideological nuts
    But contribute to open software myself.

  • Marco Peereboom at Oct 10, 2015 at 1:27 pm

    On Sat, Oct 10, 2015 at 02:23:45AM -0700, Klaus Post wrote:
    Hi!
    On Friday, 9 October 2015 16:38:27 UTC+2, Marco Peereboom wrote:

    Anyone know of a golang implementation of brotli?

    Considering it has just been released, I wouldn't expect so. Maybe someone
    has wrapped the C-code.
    I might do a port at some point, but I will wait a little for the C code
    to stabilize.
    It seems like an ok alternative to gzip for small sizes, but it is not
    like it blows it out of the water. If you are able to pre-compress the
    content it seems quite good, since the best compression modes are too slow
    to be considered for real-time compression at the moment.
    It is a pity they already defined the dictionary. Looking at it, it seems
    like it is just slapped out there as "good enough". If someone spent a
    month collecting a lot of data (ie monitor net traffic), and analyzed it,
    I think that compression for HTTP could have been a few percent better.
    This seems like a lot of concatenated strings with a little html and js in
    there. If it was machine generated most word would probably be separated
    by spaces for longer matches. Seems like a missed opportunity.
    I dumped a text version of the dictionary - 80 character wrapped
    here: https://gist.github.com/klauspost/2900d5ba6f9b65d69c8e
    /Klaus
    I asked because all other languages have an implementation ;)

    Reason I actually am interested is because it sounded like LZMA
    compression at LZO speed from the blog. I am actively working on some
    code that needs compression and speed hence it caught my eye.

    Glad to hear you are somewhat excited about it. Love your zip stuff.

  • Klaus Post at Oct 10, 2015 at 2:32 pm

    On Saturday, 10 October 2015 15:27:38 UTC+2, Marco Peereboom wrote:

    I asked because all other languages have an implementation ;)
    I think "all the other languages" are merely wrapping the C code.

    Reason I actually am interested is because it sounded like LZMA
    compression at LZO speed from the blog. I am actively working on some
    code that needs compression and speed hence it caught my eye.
    Yes - it sounds too good to be true - and it is for general data. brotli is
    at least an order of magnitude slower than LZO.

    I found this nice interactive benchmark:
    https://quixdb.github.io/squash-benchmark/

    In general, brotli is an ok replacement for gzip for text content.
    Compression time is about the same as deflate, but with an improved
    compression ratio.

    The brotli level 10+11 compresses well, but is *very* slow. But for static
    web content, that is exactly what you want. However, for most types of
    content there are better or faster compressors.

    What type of content are you looking to compress?


    Glad to hear you are somewhat excited about it. Love your zip stuff.
    I am really glad that we will have something better than gzip for web
    content once it is implemented in browsers.

    /Klaus

  • Michael Jones at Oct 10, 2015 at 2:56 pm
    http://www.gstatic.com/b/brotlidocs/brotli-2015-09-22.pdf

  • Mike Houston at Oct 28, 2015 at 12:10 am
    I have put a cgo wrapper around the CompressBuffer/DecompressBuffer
    functions:
    https://github.com/kothar/brotli-go

    I have not wrapped the stream handling functions yet.

    I've added the round-trip compression tests from the brotli repository, and
    all seems to work, but since my cgo experience was previously nil there may
    be some lurking issues. In particular I'm not sure how much buffer to
    allocate in some cases, so I have been a bit generous.

    With cgo, is it appropriate to allocate a buffer in Go and pass it to the C
    decompression function to put its output in, or should I be allocating the
    buffer on the C-side and copying it to Go using GoBytes
    or reflect.SliceHeader? Or does it matter? Will the GC be blocked during
    the call into a C function?

    Thoughts welcome.

    Mike.
    On Saturday, October 10, 2015 at 10:23:46 AM UTC+1, Klaus Post wrote:

    Hi!
    On Friday, 9 October 2015 16:38:27 UTC+2, Marco Peereboom wrote:

    Anyone know of a golang implementation of brotli?

    Considering it has just been released, I wouldn't expect so. Maybe someone
    has wrapped the C-code.

    <snip>
  • Ian Lance Taylor at Oct 28, 2015 at 12:20 am

    On Tue, Oct 27, 2015 at 5:01 PM, Mike Houston wrote:
    With cgo, is it appropriate to allocate a buffer in Go and pass it to the C
    decompression function to put its output in, or should I be allocating the
    buffer on the C-side and copying it to Go using GoBytes or
    reflect.SliceHeader? Or does it matter? Will the GC be blocked during the
    call into a C function?
    If we adopt the rules described in https://golang.org/issue/12416,
    then it is fine to allocate a []byte in Go, pass the address of
    element 0 of the slice to C, and let C code fill in the bytes.

    The GC is not blocked during a call to a C function.

    Never use reflect.SliceHeader. It was a mistake to make it an
    exported type. As the docs say, it can not be used safely or
    portably.

    Ian

  • Thebrokentoaster at Oct 28, 2015 at 6:09 am
    I finished a Brotli decoder in pure Go; you can find the package here:

        - https://github.com/dsnet/compress

    After implementing this, I feel like Brotli is at least 2-3x more
    complicated than DEFLATE. At least I could keep the entire specification
    for DEFLATE in my head... I couldn't do it for Brotli. Currently, the
    library passes on all the test files that the official Brotli repo
    contains. The next steps for the package will be:

        1. Writing more unit tests to ensure correctness of the decoder. I want
        to hit as many edge conditions as possible and places where I was not sure
        what the specification was exactly dictating.
        2. Improve performance of the decoder.
        3. Start work on an encoder.

    Enjoy,
    JT
    On Tuesday, October 27, 2015 at 5:20:33 PM UTC-7, Ian Lance Taylor wrote:

    On Tue, Oct 27, 2015 at 5:01 PM, Mike Houston <schm...@gmail.com> wrote:
    With cgo, is it appropriate to allocate a buffer in Go and pass it to the C
    decompression function to put its output in, or should I be allocating the
    buffer on the C-side and copying it to Go using GoBytes or
    reflect.SliceHeader? Or does it matter? Will the GC be blocked during the
    call into a C function?
    If we adopt the rules described in https://golang.org/issue/12416,
    then it is fine to allocate a []byte in Go, pass the address of
    element 0 of the slice to C, and let C code fill in the bytes.

    The GC is not blocked during a call to a C function.

    Never use reflect.SliceHeader. It was a mistake to make it an
    exported type. As the docs say, it can not be used safely or
    portably.

    Ian
  • Mike Houston at Oct 28, 2015 at 12:23 pm
    Excellent, in which case I won't attempt to wrap the stream interface, as
    this seems like the better long-term solution.

    Thanks for the clarification on passing pointers Ian, that is much clearer
    to me now.

    Mike.

    On Wednesday, October 28, 2015 at 6:09:49 AM UTC, thebroke...@gmail.com
    wrote:
    I finished a Brotli decoder in pure Go, you can find the package here:

    - https://github.com/dsnet/compress

    After implementing this, I feel like Brotli is at least 2-3x more
    complicated than DEFLATE. At least I could keep the entire specification
    for DEFLATE in my head... I couldn't do it for Brotli. Currently, the
    library passes on all the test files that the official Brotli repo
    contains. The next steps for the package will be:

    1. Writing more unit tests to ensure correctness of the decoder. I
    want to hit as many edge conditions as possible and places where I was not
    sure what the specification was exactly dictating.
    2. Improve performance of the decoder.
    3. Start work on an encoder.

    Enjoy,
    JT
    On Tuesday, October 27, 2015 at 5:20:33 PM UTC-7, Ian Lance Taylor wrote:
    On Tue, Oct 27, 2015 at 5:01 PM, Mike Houston wrote:

    With cgo, is it appropriate to allocate a buffer in Go and pass it to the C
    decompression function to put its output in, or should I be allocating the
    buffer on the C-side and copying it to Go using GoBytes or
    reflect.SliceHeader? Or does it matter? Will the GC be blocked during the
    call into a C function?
    If we adopt the rules described in https://golang.org/issue/12416,
    then it is fine to allocate a []byte in Go, pass the address of
    element 0 of the slice to C, and let C code fill in the bytes.

    The GC is not blocked during a call to a C function.

    Never use reflect.SliceHeader. It was a mistake to make it an
    exported type. As the docs say, it can not be used safely or
    portably.

    Ian
  • Thebrokentoaster at Oct 29, 2015 at 4:08 am
    It may still be nice to have a stream interface on the writer. I don't
    anticipate getting the encoder logic done anytime soon.

    The decoder logic is about 4000L of C, while the encoder logic is about
    6200L of C++.
    My decoder ended up being about 1500L of Go; who knows how many lines the
    encoder will end up being. Furthermore, I'll probably need to spend some
    time learning how the encoder does its magic.

    JT
    On Wednesday, October 28, 2015 at 5:23:57 AM UTC-7, Mike Houston wrote:

    Excellent, in which case I won't attempt to wrap the stream interface, as
    this seems like the better long-term solution.

    Thanks for the clarification on passing pointers Ian, that is much clearer
    to me now.

    Mike.

    On Wednesday, October 28, 2015 at 6:09:49 AM UTC, thebroke...@gmail.com
    wrote:
    I finished a Brotli decoder in pure Go, you can find the package here:

    - https://github.com/dsnet/compress

    After implementing this, I feel like Brotli is at least 2-3x more
    complicated than DEFLATE. At least I could keep the entire specification
    for DEFLATE in my head... I couldn't do it for Brotli. Currently, the
    library passes on all the test files that the official Brotli repo
    contains. The next steps for the package will be:

    1. Writing more unit tests to ensure correctness of the decoder. I
    want to hit as many edge conditions as possible and places where I was not
    sure what the specification was exactly dictating.
    2. Improve performance of the decoder.
    3. Start work on an encoder.

    Enjoy,
    JT
    On Tuesday, October 27, 2015 at 5:20:33 PM UTC-7, Ian Lance Taylor wrote:

    On Tue, Oct 27, 2015 at 5:01 PM, Mike Houston <schm...@gmail.com>
    wrote:
    With cgo, is it appropriate to allocate a buffer in Go and pass it to the C
    decompression function to put its output in, or should I be allocating the
    buffer on the C-side and copying it to Go using GoBytes or
    reflect.SliceHeader? Or does it matter? Will the GC be blocked during the
    call into a C function?
    If we adopt the rules described in https://golang.org/issue/12416,
    then it is fine to allocate a []byte in Go, pass the address of
    element 0 of the slice to C, and let C code fill in the bytes.

    The GC is not blocked during a call to a C function.

    Never use reflect.SliceHeader. It was a mistake to make it an
    exported type. As the docs say, it can not be used safely or
    portably.

    Ian
  • Damian Gryski at Oct 11, 2015 at 8:54 am
    The decoder is pure C and looks much simpler to translate than the compressor.

    Damian

  • Klaus Post at Oct 11, 2015 at 1:52 pm

    On Sunday, 11 October 2015 10:54:26 UTC+2, Damian Gryski wrote:
    The decoder is pure C and looks much simpler to translate than the
    compressor.

    Yes, but it is still 2k LOC, so it's not done in a day or two ;)

    A compressor is really the most useful thing for Go, since it is aimed at
    web servers. Of course you could deploy behind a reverse proxy and let it
    handle it. It will take some time for browser support to trickle out to end
    users, so luckily we have a bit of time.

    Also, with a bit of time, we can get an impression of how "safe" it is to
    use without SSL. Previous experience from trying other compression formats
    shows problems can occur with proxies that can only handle deflate/gzip on
    plain HTTP.

    /Klaus

  • Thebrokentoaster at Oct 12, 2015 at 8:36 am

    Reason I actually am interested is because it sounded like
    LZMA compression at LZO speed from the blog. I am actively working on
    some code that needs compression and speed hence it caught my eye.
    One innovation that Brotli uses is a static dictionary of ~122 KiB generated
    from web content in various languages and formats (like CSS, JS, etc).
    This does, however, mean that Brotli is not technically a "general purpose"
    algorithm, since it assumes that the data it is compressing is largely
    text-based. It's a wonderful idea for web content, but you should try
    building the open source program yourself and benchmarking the speed and
    compression ratio *for your datasets*. My own tests (compressing the source
    code of the Linux kernel) have found that the compression ratio is better
    than DEFLATE and LZMA at a similar compression speed. However, for maximum
    compressibility, LZMA still beats Brotli. Even if your dataset can't take
    advantage of the static dictionary, Brotli may still perform better
    (ratio-wise) than DEFLATE, since it allows sliding windows larger than
    32 KiB (up to 16 MiB in the latest RFC draft).

    I found this nice interactive benchmark:
    https://quixdb.github.io/squash-benchmark/

    In general, brotli is an ok replacement for gzip for text content. Compression
    time is about the same as deflate, but with an improved compression ratio.
    Interesting suite of benchmarks. I'm actually surprised to see that DEFLATE
    (or zlib) is pretty close to the Pareto frontier, since I thought I had read
    somewhere that DEFLATE was now far from it. Personally, I have always felt
    that DEFLATE strikes a surprisingly good balance between speed and ratio for
    *generic input datasets*, for a format designed in the early 1990s.

    Some random thoughts about using Brotli:

        - It may or may not be worth implementing in pure Go just yet, since the
        RFC is still a draft and things are still changing. A cgo/SWIG wrapper may
        be the better idea for the time being.
        - I am a little concerned about the use of a static dictionary.
        Languages change over time, and formats die and are born, which may reduce
        the effectiveness of this dictionary.
        - The 122 KiB static dictionary obviously needs to be compiled into the
        binary. This may be expensive for embedded systems.
        - (not a big deal) The current Go compiler is relatively inefficient at
        compiling large static byte slices. A Go file with just the dictionary
        alone takes 1s to compile on my machine.

    On Sunday, October 11, 2015 at 6:52:36 AM UTC-7, Klaus Post wrote:
    On Sunday, 11 October 2015 10:54:26 UTC+2, Damian Gryski wrote:

    The decoder is pure C and looks much simpler to translate than the
    compressor.

    Yes, but it is still 2k LOC, so it's not done in a day or two ;)

    A compressor is really the most useful for Go, since it is aimed at web
    servers. Of course you could deploy behind a reverse proxy and let it
    handle it. It will take some time for browser support to trickle out to end
    users, so luckily we have a bit of time.

    Also, with a bit of time, we can also get an impression of how "safe" it
    is to use without SSL. Previous experience from trying other compression
    formats show problems could occur with proxies that can only handle
    deflate/gzip on plain HTTP.

    Damian
    /Klaus
  • Klaus Post at Oct 12, 2015 at 9:30 am

    On Monday, 12 October 2015 10:36:32 UTC+2, thebroke...@gmail.com wrote:
    Interesting suite of benchmarks. I'm actually surprised to see that
    DEFLATE (or zlib) is pretty close to the Pareto frontier since I thought I
    had read in the past somewhere that DEFLATE was now far from it.
    Personally, I have always felt that DEFLATE did strike a surprisingly good
    balance between speed and ratio for *generic input datasets* for a
    format designed in the early 1990s.
    Yes - it has stayed relevant, since it has always had a good tradeoff
    between speed and acceptable compression. It was never the absolute best at
    either, but the flexibility and ability to scale has suited it well.

    Some random thoughts about using Brotli:

    - It may or may not be worth implementing in pure Go just yet since
    the RFC is still a draft and things are still changing. A CGo/SWIG wrapper
    may be the better idea for the time being.
    Agree, even though that will keep it from production use for some people,
    it will give a good base for testing.

    - I am a little concerned about the use of a static dictionary.
    Languages change over time and formats die and are born, which may reduce
    the effectiveness of this dictionary.

    Well, it is kind of an "it doesn't hurt" thing. If deflate had an
    initial dictionary, it would improve compression in some cases, but it
    would never hurt compression. It is kind of the same with brotli.

    - The 122Ki static dictionary obviously needs to be compiled into the
    binary. This may be expensive for embedded systems.

    I guess if it is a package, then you could choose not to include it in
    some builds via build tags.

    There are of course also all the transformed dictionaries, which will eat up
    some megabytes of RAM, along with the hash table for all entries. The
    C implementation has (some sort of) hashes of all transformed entries, as
    well as all valid transformation words, as static tables. I guess it is a
    question of trading 0.5s startup time for 0.5s compile time and a bigger
    binary. I guess tests are needed.

    - (not-big-deal) The current Go compiler is currently relatively
    inefficient about compiling large static byte slices. A Go file with
    just the dictionary alone takes 1s to compile on my machine.
    My guess is that the compiler would be able to cache the object files with
    a "go install". You could maybe even have the dictionaries in a separate
    package for that reason.

    /Klaus

  • Andy Balholm at Oct 12, 2015 at 5:04 pm
    Also, with a bit of time, we can also get an impression of how "safe" it is to use without SSL. Previous experience from trying other compression formats show problems could occur with proxies that can only handle deflate/gzip on plain HTTP.
    Don’t forget that there are proxies that decrypt SSL. But hopefully a proxy smart enough to do that is also smart enough to modify the Accept-Encoding header on requests.


Discussion Overview
group: golang-nuts
categories: go
posted: Oct 9, '15 at 2:38p
active: Oct 29, '15 at 4:08a
posts: 18
users: 11
website: golang.org
