Message-ID: <20061004155747.GA4096@wohnheim.fh-wedel.de>
Date: Wed, 4 Oct 2006 17:57:47 +0200
From: Jörn Engel <joern@...nheim.fh-wedel.de>
To: Phillip Susi <psusi@....rr.com>
Cc: Jan Engelhardt <jengelh@...ux01.gwdg.de>, Willy Tarreau <w@....eu>,
Drew Scott Daniels <ddaniels@...lumni.mb.ca>,
linux-kernel@...r.kernel.org
Subject: Compressing pages [was: Re: Smaller compressed kernel source tarballs?]
On Tue, 3 October 2006 14:24:01 -0400, Phillip Susi wrote:
> Jan Engelhardt wrote:
> >There are lots of obscure compression formats that achieve somewhat
> >better compression at the cost of MUCH more time (neglecting they are
> >not too open), such as MS CAB and ACE.
>
> CAB is an archive container format, not a compression algorithm. Last
> time I worked on some code to handle it, they used the standard LZW
> algorithm implemented by gzip ( but had the ability to support others in
> the future ) and could only compress 32kb blocks. The small block size
> led to poor compression.
Actually, compression in 4KiB blocks is a _very_ interesting
benchmark. JFFS2 compresses in chunks of that size, and other
compressed filesystems likely do the same, although possibly with
something larger, like 64KiB.
And the results are completely different in that benchmark. Gzip
actually beats bzip2 hands-down on compression ratio, for example.
I used to have a script, but cannot find it anymore. Basically
something like:
while (read next 4KiB from input file) {
        compress chunk
        add compressed_size to total
}
print total
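
In Python, roughly (zlib's deflate standing in for gzip, input file
given on the command line; just a sketch, not the original script):

import sys
import zlib

CHUNK = 4096  # 4KiB, the same block size JFFS2 compresses in

total = 0
with open(sys.argv[1], "rb") as f:
    while True:
        chunk = f.read(CHUNK)     # read next 4KiB from the input file
        if not chunk:
            break
        # compress the chunk and add its compressed size to the total;
        # zlib adds a few bytes of header/checksum per chunk
        total += len(zlib.compress(chunk))
print(total)

Swapping zlib.compress for bz2.compress (from the bz2 module) gives
the bzip2 number for comparison.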
Jörn
--
Unless something dramatically changes, by 2015 we'll be largely
wondering what all the fuss surrounding Linux was really about.
-- Rob Enderle