Hello,

I've released a native LZO1X implementation in Go:

    https://github.com/rasky/go-lzo

It implements both LZO1X-1 and LZO1X-999. It is a direct translation from
the original C source code, so I'm releasing it under the same license
(GPLv2). I plan to eventually rewrite it from scratch now that I'm familiar
with the algorithms, and release it under MIT.

Giovanni Bajo



  • Klaus Post at Oct 20, 2015 at 10:19 am
    Hi!
    On Monday, 19 October 2015 17:57:07 UTC+2, Giovanni Bajo wrote:

    Hello,

    I've released a native LZO1X implementation in Go:

    https://github.com/rasky/go-lzo

    Very nice - lzo1x-1 seems very competitive with Snappy, both in terms of
    speed and compression.

    Matt Mahoney's 10GB corpus: lzo1x-1 is 5% faster and gives 2% better
    compression than Snappy with 64KB blocks.
    enwik9: Snappy is 30% faster, but has 4.5% worse compression, again with
    64KB blocks.
    Random bytes: incredibly fast; beats everything by an order of magnitude.
    Of course there is no compression gain.

    It seems that lzo1x-1 gets slower as the compression ratio increases (up
    to a certain point), which is an interesting characteristic. Its ability
    to skip uncompressible content is especially interesting, and gives it a
    huge edge as a general compressor.

    "1x-999" seems VERY slow. 5-10MB/s. I don't suspect this to be competitive
    with 64KB blocks. You should consider adding some options in-between.

    It would help if you added an io.ReadCloser interface. I assume it would
    be possible for you to design the re-implementation so that it doesn't
    require all content to be in memory.

    I have not tested decompression, but I would expect it to be good as well.

    Great work!

    /Klaus


  • Klaus Post at Oct 20, 2015 at 11:43 am
    Hi!

    I found the "magic" part: ip += 1 + (ip-ii)>>5

    "ip" is current position and "ii" is the position of the last match. If
    there is no match within 32 bytes it skips an additional byte. Brilliant!

    This can be applied to almost all compression algorithms, so now I have to
    re-benchmark my deflate packages ;)

    There should probably be some reset mechanism, so this doesn't get out of
    hand on extreme block sizes, but it solves the problem of uncompressible
    content taking very long to compress (without any gain) in a very nice
    way.
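
    For illustration, here is a rough, self-contained Go sketch of how that
    skip step behaves on incompressible input. The names (scanWithSkip,
    probes) are mine, not go-lzo's, and the real match search is only hinted
    at in a comment:

        package main

        import "fmt"

        // scanWithSkip counts how many positions a matcher would probe in src
        // when every probe fails (the worst case: incompressible input).
        // After each failed probe the step grows by one byte for every 32
        // bytes scanned since the last match, mirroring ip += 1 + (ip-ii)>>5.
        func scanWithSkip(src []byte) int {
            probes := 0
            ip := 0 // current position
            ii := 0 // position of the last match (start of the literal run)
            for ip < len(src) {
                probes++
                // A real compressor would try a hash-table match at ip here;
                // on success it would emit the match, advance ip past it and
                // reset ii = ip, which also resets the step back to 1.
                // Here we assume every probe fails, as on random data.
                ip += 1 + (ip-ii)>>5
            }
            return probes
        }

        func main() {
            block := make([]byte, 64<<10) // one 64KB block of random-like input
            fmt.Printf("probed %d of %d positions\n",
                scanWithSkip(block), len(block))
        }

    On a 64KB block the loop touches only a small fraction of the positions,
    which matches the "incredibly fast on random bytes" observation above;
    capping or resetting (ip-ii), as suggested, would bound the step on very
    large blocks.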

    Great stuff!

    /Klaus

  • Sokolov Yura at Oct 21, 2015 at 6:00 am
    LZ4 uses a similar trick too (but with 128 bytes).

  • Giovanni Bajo at Oct 22, 2015 at 9:04 pm

    On Tuesday, October 20, 2015 at 12:19:00 PM UTC+2, Klaus Post wrote:
    "1x-999" seems VERY slow. 5-10MB/s. I don't suspect this to be competitive
    with 64KB blocks. You should consider adding some options in-between.
    I've now exposed the original compression levels for 999. I guess they
    might be interesting for someone.

    I've also published my benchmarks for compressors.

    It would help if you added an io.ReadCloser interface. I assume it would
    be possible for you to design the re-implementation so that it doesn't
    require all content to be in memory.
    Yes, it'll probably require a circular buffer with the max look-behind size.
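
    As a rough illustration of that idea (not the actual go-lzo code: the
    window size and all names below are assumptions, with the size chosen as
    a power of two comfortably larger than LZO1X's maximum match offset):

        package main

        import "fmt"

        // windowSize only needs to cover the maximum look-behind distance; a
        // power of two keeps the wrap-around a cheap bit-mask. (Assumed
        // value, not taken from go-lzo.)
        const windowSize = 64 << 10

        // window keeps just the last windowSize bytes of decompressed output,
        // so a streaming decoder can resolve back-references without holding
        // the whole stream in memory.
        type window struct {
            buf [windowSize]byte
            pos uint32 // total number of bytes emitted so far
        }

        // writeByte appends one literal byte of output (a real streaming
        // reader would also hand it to the consumer).
        func (w *window) writeByte(b byte) {
            w.buf[w.pos&(windowSize-1)] = b
            w.pos++
        }

        // copyMatch re-emits length bytes starting dist bytes back in the
        // output, one byte at a time so that overlapping copies (dist <
        // length) repeat the pattern, as LZ77-style matches require.
        func (w *window) copyMatch(dist, length uint32) {
            for i := uint32(0); i < length; i++ {
                w.writeByte(w.buf[(w.pos-dist)&(windowSize-1)])
            }
        }

        func main() {
            var w window
            for _, b := range []byte("abc") {
                w.writeByte(b)
            }
            w.copyMatch(3, 6)  // overlapping copy: emits "abcabc"
            fmt.Println(w.pos) // 9 bytes of output in total
        }

    An io.Reader on top of something like this would hand out bytes as they
    are written, keeping at most windowSize of history around.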

    I have not tested decompression, but I would expect it to be good as well.
    It should be pretty fast. I've used a couple of tricks so that io.Reader
    can be implemented without too much overhead, though I'm sure it's
    possible to do even better.

    Giovanni Bajo


Discussion Overview
group: golang-nuts
categories: go
posted: Oct 19, 2015 at 3:56 pm
active: Oct 22, 2015 at 9:04 pm
posts: 5
users: 3
website: golang.org
