Message-ID: <20120816175234.GL11413@one.firstfloor.org>
Date: Thu, 16 Aug 2012 19:52:34 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Roman Mamedov <rm@...anrm.ru>
Cc: Johannes Stezenbach <js@...21.net>,
"Markus F.X.J. Oberhumer" <markus@...rhumer.com>,
linux-kernel@...r.kernel.org, Andi Kleen <andi@...stfloor.org>,
chris.mason@...ionio.com, linux-btrfs@...r.kernel.org,
Nitin Gupta <ngupta@...are.org>,
Richard Purdie <rpurdie@...nedhand.com>,
richard -rw- weinberger <richard.weinberger@...il.com>,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [GIT PULL] Update LZO compression
> I have locked the Allwinner A10 CPU in my Mele A2000 to 60 MHz using cpufreq-set,
> and ran your test. rnd.lzo is a 9 MB file from /dev/urandom compressed with lzo.
> There doesn't seem to be a significant difference between all three variants.
I found that compression benchmark results depend a lot on the data
being compressed.
Data from urandom (which should be essentially incompressible) is handled
by different code paths in the compressor than more compressible data:
with no matches to find, the compressor degenerates into a complicated
memcpy of literals.
At the other extreme there are IO benchmarks that write only zeroes,
which gives an equally unrealistic picture.
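
To make the two extremes concrete, here is a minimal user-space sketch.
It uses zlib as a stand-in for the kernel LZO code (so the absolute
numbers are only illustrative) and compresses a random buffer and an
all-zero buffer of the same size:

/* illustrative sketch: zlib stands in for LZO; build: cc demo.c -lz */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

static void try_compress(const char *name, const unsigned char *src, uLong n)
{
	uLongf dlen = compressBound(n);
	unsigned char *dst = malloc(dlen);

	if (dst && compress2(dst, &dlen, src, n, Z_DEFAULT_COMPRESSION) == Z_OK)
		printf("%-6s %lu -> %lu bytes (ratio %.2f)\n", name,
		       (unsigned long)n, (unsigned long)dlen, (double)n / dlen);
	free(dst);
}

int main(void)
{
	enum { N = 1 << 20 };		/* 1 MB test buffer */
	unsigned char *buf = malloc(N);
	size_t i;

	if (!buf)
		return 1;
	/* pseudo-random: no matches to find, degenerates to literal copies */
	for (i = 0; i < N; i++)
		buf[i] = rand() & 0xff;
	try_compress("random", buf, N);

	/* all zeroes: one long match, unrealistically compressible */
	memset(buf, 0, N);
	try_compress("zeroes", buf, N);

	free(buf);
	return 0;
}

The random buffer comes out near ratio 1.0 while the zero buffer
compresses by orders of magnitude, which is why neither alone tells you
much about a compressor.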
Usually it's best to use a corpus with different data types, from very
compressible to barely compressible, and look at the aggregate.
For my snappy work I usually used at least large executables (medium
compressibility), some pdfs (already compressed; low), and uncompressed
source code tars (highly compressible).
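
A minimal sketch of that aggregate measurement (again with zlib standing
in for LZO; the corpus is whatever mix of files you pass on the command
line):

/* aggregate corpus benchmark sketch; build: cc bench.c -lz */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

int main(int argc, char **argv)
{
	unsigned long long tin = 0, tout = 0;
	double secs = 0.0;
	int i;

	for (i = 1; i < argc; i++) {
		FILE *f = fopen(argv[i], "rb");
		unsigned char *src, *dst;
		uLongf dlen;
		clock_t t0;
		long n;

		if (!f)
			continue;
		fseek(f, 0, SEEK_END);
		n = ftell(f);
		rewind(f);
		src = malloc(n);
		dlen = compressBound(n);
		dst = malloc(dlen);
		if (src && dst && fread(src, 1, n, f) == (size_t)n) {
			t0 = clock();
			if (compress2(dst, &dlen, src, n,
				      Z_DEFAULT_COMPRESSION) == Z_OK) {
				tin += n;
				tout += dlen;
				secs += (double)(clock() - t0) / CLOCKS_PER_SEC;
			}
		}
		free(src);
		free(dst);
		fclose(f);
	}
	/* report corpus-wide totals, not per-file numbers */
	printf("total %llu -> %llu bytes, ratio %.2f, %.1f MB/s\n",
	       tin, tout, tout ? (double)tin / tout : 0.0,
	       secs > 0 ? tin / secs / 1e6 : 0.0);
	return 0;
}

Run it against a mixed corpus (e.g. some executables, pdfs and a source
tar; file names are up to you) and compare the corpus-wide ratio and
throughput between compressor variants, rather than any single file.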
-Andi