Message-ID: <ac9c93b10809150546w7644316bk53122b0ce5021ff4@mail.gmail.com>
Date:	Mon, 15 Sep 2008 14:46:41 +0200
From:	"Frans Meulenbroeks" <fransmeulenbroeks@...il.com>
To:	"Rob Landley" <rob@...dley.net>
Cc:	"Willy Tarreau" <w@....eu>, "Alain Knaff" <alain@...ff.lu>,
	torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] init: bzip2 or lzma -compressed kernels and initrds

2008/9/15 Rob Landley <rob@...dley.net>:
> On Sunday 07 September 2008 00:48:31 Willy Tarreau wrote:
>> Hi Alain,
>> > +config KERNEL_LZMA
>> > +       bool "LZMA"
>> > +       help
>> > +         The most recent compression algorithm.
>> > +    Its ratio is best, decompression speed is between the other
>> > +    2. Compression is slowest.
>> > +    The kernel size is about 33 per cent smaller with lzma,
>> > +    in comparison to gzip.
>>
>> isn't memory usage in the same range as bzip2 ?
>
> Last I checked it was more.  (I very vaguely recall somebody saying 16 megs
> working space back when this was first submitted to busybox, but that was a
> few years ago...)
>
> A quick Google found a page that benchmarks them.  Apparently it depends
> heavily on which compression option you use:
>
> http://tukaani.org/lzma/benchmarks
>

[...]

Apologies if I'm sidetracking the discussion, but I'd like to make a remark.

For the kernel, ramfs image, etc., the best choice is the one with the
fastest decompression (the info on tukaani.org says that is gzip).
Rationale: the faster it decompresses, the faster the system boots.

Of course this only holds if the backing storage can hold that
image. For disk-based systems I assume this is not a problem at all,
but for embedded systems with all software in flash, a higher
compression ratio (e.g. lzma) can make the difference between
fit and not fit (so in those cases lzma could just make your day).
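
To put rough numbers on that (purely illustrative, using the ~33%
figure quoted above): if gzip gets a kernel down to 3.0 MiB, lzma
would land around 2.0 MiB, and with a 2.5 MiB flash partition that is
exactly the difference between fit and not fit.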

Side note: although I think the conclusion on the tukaani website
holds, the data themselves are questionable.
I guess the tests were done on the laptop's internal hard disk (this
is not specified). It would be better to run them on a ramfs to avoid
effects from data still being in the buffer cache (or not yet being in it).

Also, the actual time in the tests is spent on three things: reading
from disk, decompressing, and writing to disk. (I'll only talk about
decompression here; I guess an additional second or so to compress is
not that important.)
You can argue that the last is a constant, as the same amount of data
is written, but the first, the read time, depends on the actual
amount of data and the transfer rate of the device.
So on slower devices higher compression could well come out ahead: if
the reduction in read time is bigger than the additional cost of the
slower decompression, the net effect is still a win when it comes to
boot time.
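
To make that concrete, here is a minimal back-of-envelope sketch (all
ratios and rates below are invented for illustration, not measurements):

/*
 * boot_time = compressed_size / read_rate
 *           + uncompressed_size / decompress_rate
 *
 * All figures are made up to show the crossover.
 */
#include <stdio.h>

int main(void)
{
	double raw     = 8.0;            /* uncompressed image, MiB         */
	double gz_img  = raw * 0.50;     /* assumed gzip ratio              */
	double lz_img  = gz_img * 0.67;  /* ~33% smaller than gzip          */
	double gz_rate = 60.0;           /* assumed gzip decompress, MiB/s  */
	double lz_rate = 20.0;           /* assumed lzma decompress, MiB/s  */
	double media[] = { 2.0, 50.0 };  /* slow flash vs. fast disk, MiB/s */
	int i;

	for (i = 0; i < 2; i++) {
		double r = media[i];
		printf("read %5.1f MiB/s: gzip %.2fs, lzma %.2fs\n",
		       r,
		       gz_img / r + raw / gz_rate,
		       lz_img / r + raw / lz_rate);
	}
	return 0;
}

With these numbers the slow medium favours lzma (about 1.7s vs 2.1s)
while the fast one favours gzip (about 0.2s vs 0.5s), so the winner
really does depend on the device.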

And finally: I've seen substantial timing differences when comparing
algorithms on different architectures (arm/mips/x86), so the processor
might also make a difference as to which is best (and so will the
compiler).

FM
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
