Message-ID: <20120131125232.GD4408@elte.hu>
Date:	Tue, 31 Jan 2012 13:52:32 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Arjan van de Ven <arjan@...radead.org>
Cc:	linux-kernel@...r.kernel.org, Milton Miller <miltonm@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	arjanvandeven@...il.com,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: smp: Start up non-boot CPUs asynchronously


* Arjan van de Ven <arjan@...radead.org> wrote:

> From 3700e391ab2841a9f9241e4e31a6281aa59be5f1 Mon Sep 17 00:00:00 2001
> From: Arjan van de Ven <arjan@...ux.intel.com>
> Date: Mon, 30 Jan 2012 20:44:51 -0800
> Subject: [PATCH] smp: Start up non-boot CPUs asynchronously
> 
> The starting of the "not first" CPUs actually takes a lot of 
> the kernel's boot time... up to "minutes" on some of the 
> bigger SGI boxes. Right now, this is a fully sequential 
> operation with the rest of the kernel boot.

Yeah.

> This patch turns the bringup of the other CPUs into an 
> asynchronous operation, saving significant kernel boot time 
> (40% on my laptop!!). Basically CPUs now get brought up in 
> parallel with disk enumeration, graphics mode bringup, etc.

Very nice!

> Note that the implementation in this patch still waits for all 
> CPUs to be brought up before starting userspace; I would love 
> to remove that restriction over time (technically that is 
> simple), but that then becomes a change in behavior... I'd 
> like to see more discussion on whether that is a good idea 
> before I write that patch.

Yeah, it's a good idea to be conservative with that - most of 
the silent assumptions will be on the kernel init side anyway 
and we want to map those out first, without any userspace 
variance mixed in.

I'd expect this patch to eventually break stuff in the kernel - 
we'll fix any kernel bugs that get uncovered, and we can move on 
to make things more parallel once that process has stabilized.
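
For reference, the "bring them all up async, but still wait 
before userspace" behavior could be expressed roughly like the 
sketch below, using the existing async infrastructure 
(async_schedule() plus a final async_synchronize_full()), with 
async_cpu_up() as quoted further down. The function name 
smp_init_async() and its exact placement are illustrative 
assumptions here, not the actual patch:

	#include <linux/async.h>
	#include <linux/cpu.h>
	#include <linux/init.h>
	#include <linux/smp.h>

	/*
	 * Illustrative sketch only: schedule each secondary CPU
	 * bring-up as async work, then wait for all of it before
	 * userspace is started.
	 */
	void __init smp_init_async(void)
	{
		unsigned int cpu;

		/* The boot CPU is already online; schedule the rest. */
		for_each_present_cpu(cpu) {
			if (!cpu_online(cpu))
				async_schedule(async_cpu_up,
					       (void *)(unsigned long)cpu);
		}

		/* Wait for all async CPU bring-up before init runs. */
		async_synchronize_full();
	}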

> Second note: We add a small delay between the bring-up of 
> CPUs; this is needed to actually get a boot time improvement. 
> If we bring up CPUs straight back-to-back, we hog the CPU 
> hotplug lock for write, and that lock is used everywhere 
> during initialization for read. By adding a small delay, we 
> allow those tasks to make progress.

> +void __init async_cpu_up(void *data, async_cookie_t cookie)
> +{
> +	unsigned long nr = (unsigned long) data;
> +	/*
> +	 * We can only bring up one CPU at a time, due to the
> +	 * hotplug lock; it's better to wait for all earlier CPUs
> +	 * to be done before us so that the bring-up order is
> +	 * predictable.
> +	 */
> +	async_synchronize_cookie(cookie);
> +	/*
> +	 * Wait a little bit of time between CPUs, so that the
> +	 * kernel boot does not get stuck for a long time on the
> +	 * hotplug lock. We wait longer for the first CPU since
> +	 * much of the early kernel init code takes the hotplug
> +	 * lock.
> +	 */
> +	if (nr < 2)
> +		msleep(100);
> +	else
> +		msleep(5);

Hm, the limits here seem way too ad hoc and rigid to me.

The bigger worry is that it makes the asynchrony of the boot 
process very timing-dependent, 'hiding' a lot of early code on 
faster boxes and only interleaving the execution on slower 
boxes. But slower boxes are harder to debug!

The real fix would be to make the init code paths depend less 
on each other, i.e. have fewer hotplug lock dependencies. Or, 
if it's such a hot lock for a good reason, why does spinning on 
it slow down the boot process? It really shouldn't.
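
For context, the read side in question is typically the 
get_online_cpus()/put_online_cpus() pair that init code wraps 
around its per-CPU setup, while cpu_up() takes the same lock on 
the write side. The helper below is a made-up illustration of 
that pattern, not code from the patch:

	#include <linux/cpu.h>
	#include <linux/init.h>

	/*
	 * Hypothetical example of the read-side pattern: init code
	 * holds the hotplug lock for read around per-CPU setup, so
	 * a writer (cpu_up()) holding it for long stretches stalls
	 * all such callers.
	 */
	static int __init example_percpu_setup(void)
	{
		unsigned int cpu;

		get_online_cpus();	/* hotplug lock, read side */
		for_each_online_cpu(cpu) {
			/* ... allocate/initialize per-CPU state ... */
			(void)cpu;
		}
		put_online_cpus();

		return 0;
	}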

So I think this bit is not a good idea. Let's just be fully 
parallel, profile early execution via 'perf kvm' or so, and 
figure out where the hotplug lock overhead comes from?

Thanks,

	Ingo
