Message-Id: <1236004353.10055.49.camel@andreas-laptop>
Date:	Mon, 02 Mar 2009 15:32:33 +0100
From:	Andreas Robinson <andr345@...il.com>
To:	Kay Sievers <kay.sievers@...y.org>
Cc:	Rusty Russell <rusty@...tcorp.com.au>, sam@...nborg.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/6] module, kbuild: Faster boot with custom kernel.

I have finished testing parallel module loading. It looks like a small
kernel with a minimal initramfs, running several instances of insmod or
modprobe in parallel, has the best complexity-to-performance ratio.

Testing shows that a megamodule is slightly slower than parallel
insmods, so it's not really an option anymore.

A monolithic kernel with parallelized initcalls is better - about 200 ms
faster than parallel insmods on my test system. However, it comes with a
fairly large set of changes:

* First, you need a 200-line patch in init/main.c (do_initcalls() and
friends).

* Then the built-in module dependencies must be calculated properly,
e.g. with a modified depmod, and added to the build process.

* Finally, "soft" dependencies, i.e. dependencies that are not implied
by symbol use, have to be formalized and moved into the kernel somehow.
Right now they are only defined in "install" commands in modprobe.conf.
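
For instance (hypothetical module names), the only way to say "load bar
whenever foo is loaded" is an install rule along these lines:

  install foo /sbin/modprobe --ignore-install foo; /sbin/modprobe bar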

So, what do you think, should I keep going? IMHO, the slower userspace
implementation is acceptable since it's so much simpler.

Thanks,
Andreas

------------------------

Here are the test results:

Setup:

HP Pavilion DV6300 laptop with an AMD Turion TL-56 CPU @ 1.8 GHz. In a
benchmark at Tom's Hardware, the CPU scores similarly to an Intel Core
Duo T2300.
http://www.tomshardware.com/reviews/dual-core-notebook-cpus-explored,1553-11.html

Results:

   Configuration                    |   T    | stddev (n = 10)
------------------------------------+--------+--------
[1] Serial monolithic kernel        | 3.08 s |  0.08
[2] Megamodule, serial initcalls    | 3.26 s |  0.05
[3] Megamodule, parallel initcalls  | 2.27 s |  0.01
[4] Parallel insmods w/ mutex patch | 2.20 s |  0.01 <- best choice
[5] Parallel monolithic kernel      | 2.02 s |       <- estimate only

T = Time from kernel startup until all module initcalls have
    executed, i.e. when init can mount the root filesystem.

[1] Monolithic 2.6.29-rc5 with fastboot cmdline option. No initramfs.

[2] 94 modules linked into one megamodule. The megamodule executed
    module initcalls sequentially. All files were on a minimal
    initramfs. The kernel had fastboot enabled.

[3] Like [2], but the initcalls ran in parallel.
    (Dependencies were accounted for.) Minimal initramfs, fastboot.

[4] 94 modules inserted with a custom insmod, in parallel.
    (Dependencies were accounted for.) Minimal initramfs, fastboot.
    Rusty's module loader mutex patch was applied.

[5] This is an estimate of how much faster [4] would be if
    load_module() took no time at all: T5 = T4 - (T2 - T1).
    That is, I assume T2 - T1 is the time spent in load_module().
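    With the measured values: T5 = 2.20 - (3.26 - 3.08) = 2.02 s.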

Note:

By minimal initramfs, I mean one that, together with the small kernel,
is roughly the same size as the equivalent monolithic kernel. The only
executable on it is init, written in C. There are no shell scripts,
busybox, progress bars or unused modules.
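
To give an idea of what that init does, here is a simplified sketch
(not the actual test program; the module paths and the "wave" grouping
below are made up, and would normally be generated from depmod output
at build time):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * One NULL-terminated list of module paths per dependency "wave".
 * Modules within a wave do not depend on each other, so their insmods
 * can run concurrently; each wave starts only after the previous one
 * has finished.
 */
static const char *wave0[] = { "/lib/scsi_mod.ko", "/lib/usbcore.ko", NULL };
static const char *wave1[] = { "/lib/sd_mod.ko", "/lib/ehci-hcd.ko", NULL };
static const char **waves[] = { wave0, wave1, NULL };

static pid_t spawn_insmod(const char *path)
{
	pid_t pid = fork();

	if (pid == 0) {
		execl("/sbin/insmod", "insmod", path, (char *)NULL);
		perror("execl");
		_exit(127);
	}
	return pid;
}

int main(void)
{
	int w, i;

	for (w = 0; waves[w]; w++) {
		/* Start every insmod in this wave... */
		for (i = 0; waves[w][i]; i++)
			if (spawn_insmod(waves[w][i]) < 0)
				perror("fork");

		/* ...and wait for all of them before the next wave. */
		while (wait(NULL) > 0)
			;
	}

	/* A real init would now mount the root fs and exec the next stage. */
	return 0;
}

A per-module dependency scheme can start each insmod as soon as its own
prerequisites are in, but the wave version is enough to show the
structure.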


