Date:	Fri, 20 Feb 2009 02:55:53 +0100
From:	Kay Sievers <kay.sievers@...y.org>
To:	Andreas Robinson <andr345@...il.com>
Cc:	Rusty Russell <rusty@...tcorp.com.au>, sam@...nborg.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/6] module, kbuild: Faster boot with custom kernel.

On Fri, Feb 20, 2009 at 01:37, Andreas Robinson <andr345@...il.com> wrote:
> Ok, I've run some tests now and the results are not quite what I expected.
>
> There are 3 cases:
>
> Case 1 used a monolithic kernel with a full set of 95 linked-in
> external modules.
> Case 2 loaded a single mega-module (same set of 95 modules of course)
> while case 3 looped through a list of pathnames and called insmod on
> each one.
>
> The test machine boots the monolithic kernel in 6.27 seconds. This is
> about 0.5 seconds faster than the other two cases, that were roughly
> equal.
>
> The setup:
>
> HP Pavilion dv6300 laptop,
> TL-56 Turion 64 X2 CPU @1.8GHz, 5400 RPM 2.5" Seagate SATA drive
>
> Linux 2.6.29-rc5, with the .config derived from the generic Ubuntu 8.10 kernel.
>
> All three cases had a minimal initramfs with busybox, insmod and an
> init script that only inserted modules and then halted. IOW, no root
> fs was mounted.
>
> The results:
>
> 1. Monolithic:  6.27 s, (0.22). bzImg=3419 kB ramfs=515 kB
> 2. Megamodule:  6.80 s, (0.16). bzImg=2297 kB ramfs=1783 kB
> 3. Insmod list: 6.83 s, (0.07). bzImg=2297 kB ramfs=1942 kB
>
> 10 samples were taken in each case. Standard deviations are in parenthesis.
> The measured times are printk timestamps from a dummy module inserted last.
>
> Reading these benchmark results I can only conclude that my work is
> useless and life has no meaning.
>
> So, what's missing or been done wrong here? To be honest, I expected
> the difference between monolithic and modular to be greater.
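
(For reference, the sequential insmod loop of case 3 might look roughly like the sketch below. The list file name and the INSMOD override are illustrative assumptions, not taken from the actual test script.)

```shell
#!/bin/sh
# Rough sketch of a case-3 style init loop: insmod each module path from
# a pre-generated list, one at a time. INSMOD is overridable purely so
# the control flow can be exercised outside an initramfs; /modules.list
# is a made-up name.
INSMOD="${INSMOD:-insmod}"

load_all() {
    while read -r mod; do
        "$INSMOD" "$mod"    # sequential: each load completes before the next
    done < "$1"
}
```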

How do you compare the monolithic kernel? What are the 6.27 seconds
here? The time to boot into /sbin/init without any module load? I
wouldn't be surprised if you just wait in the kernel for some
hardware, driven by a built-in module, to init. Do you use current git
and the "fastboot" commandline option?

I don't think the megamodule will help us much, unless we change the
kernel to parallelize something here, or save some other overhead. It
should not be different from the serial insmods if the data is not
scattered over the disk, which it isn't in an initramfs. And the
fragmentation problem should not be solved with a mega-module. :)

The general problem is that today's distros start something like 100
modprobes in 2 seconds, which does not really compare to a sequential
insmod test.

I'm pretty sure that for real-world numbers we need to load them in
parallel, and look at the kernel side to see if we can minimize the
locked code area, so we can do as much as possible in parallel.
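
A minimal sketch of what loading them in parallel could look like from
userspace (the LOADER override and the module names in the comments are
assumptions for illustration, not part of any proposal):

```shell
#!/bin/sh
# Hypothetical parallel loader: start one insmod per module in the
# background, then wait for all of them to finish. LOADER is overridable
# purely so the control flow can be tested without real modules.
LOADER="${LOADER:-insmod}"

load_parallel() {
    for mod in "$@"; do
        "$LOADER" "$mod" &   # background each load
    done
    wait                     # returns once every background loader exits
}
```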

Only if that is addressed, I think, can we get useful numbers on how
much time we really spend on in-kernel linking, and start evaluating
whether pre-linking modules would make sense or not.

Thanks,
Kay
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
