On Tue, Mar 29, 2016 at 1:54 AM, wrote:
> > Even if you are now resetting every 64KB, that still means that the
> > "s += 1 + ((s - lit) >> 4)" skipping can be very aggressive within each
> > 64KB block. Specifically, it is exponential instead of quadratic (see,
> > for example, the table after "It prints" on the golang/snappy commit
> > linked to immediately above), which I think is too aggressive.

> I have some "mixed content" data: a typical backup set and a virtual disk
> image. I do not see any significant compression loss from the aggressive
> skipping. IMO the risk is small, since only up to 64KB will be affected.
> In my opinion that is a reasonable trade for levels 1-3, where levels 2
> and 3 are set less aggressively.

We're repeating ourselves, but IMO the risk is non-zero, and the
existing snappy algorithm is plenty fast enough.
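For readers following the thread, the growth-rate difference being argued about can be sketched like this. This is a minimal illustration of the two heuristics' shapes, not the actual golang/snappy or C++ snappy encoder code; the function names and the exact C++-style constants here are my own simplification:

```go
package main

import "fmt"

// probesAggressive counts how many hash probes it takes to advance n bytes
// with the golang/snappy-style heuristic s += 1 + ((s - lit) >> 4): the
// per-probe skip grows in proportion to the distance already covered, so
// the position advances geometrically (exponential in the probe count).
func probesAggressive(n int) int {
	d, probes := 1, 0 // d is the offset from the last literal boundary
	for d < n {
		d += 1 + (d >> 4)
		probes++
	}
	return probes
}

// probesCppStyle counts probes under a C++-snappy-style heuristic, where
// each probe advances by skip>>5 and skip increments by one per probe: the
// advance grows linearly with the probe count, so the position covered
// grows only quadratically.
func probesCppStyle(n int) int {
	d, skip, probes := 1, 32, 0
	for d < n {
		d += skip >> 5
		skip++
		probes++
	}
	return probes
}

func main() {
	const block = 64 * 1024 // one 64KB block, the reset interval discussed above
	fmt.Println("aggressive (exponential) probes:", probesAggressive(block))
	fmt.Println("c++-style (quadratic) probes:   ", probesCppStyle(block))
}
```

The exponential variant crosses a 64KB block in far fewer probes, which is exactly the trade-off under debate: fewer probes means faster encoding of incompressible runs, but also more chances to skip past a match.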

I asked the snappy-compression mailing list (where the C++ snappy code
is discussed) for their thoughts. I'll crunch some C++ numbers
tomorrow (it's getting late here in Sydney).



Discussion Overview
group: golang-dev
posted: Mar 22, '16 at 10:53a
active: Apr 10, '16 at 2:06p