Message-ID: <CA+55aFwuGpYJ944EnZq5-DJ3dLeCu1YA5GncRQueYnDaQPVhog@mail.gmail.com>
Date:	Wed, 1 Feb 2012 15:31:16 -0800
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Arjan van de Ven <arjanvandeven@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Milton Miller <miltonm@....com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: smp: Start up non-boot CPUs asynchronously

On Wed, Feb 1, 2012 at 3:09 PM, Arjan van de Ven
<arjanvandeven@...il.com> wrote:
>
>
> We spend slightly more than 10 milliseconds on doing the hardware-level
> "send IPI, wait for the CPU to get power" dance. This is mostly just
> hardware physics.
> We spend a bunch of time calibrating loops-per-jiffy/TSC (in 3.3-rc this is
> only done once per socket, but each time we do it, it takes several dozen
> milliseconds).
> We spend 20 milliseconds on making sure the TSC is not out of sync with the
> rest of the system (we're looking at optimizing this right now).
>
> A 3.2 kernel spent, on average, 120 milliseconds per logical non-boot CPU on
> my laptop. 3.3-rc is better (the calibration is now cached for each physical
> CPU), but it's still dire.

Could we drop the cpu hotplug lock earlier?

In particular, maybe we could split it up, and make it something like
the following:

 - keep the existing cpu_hotplug.lock with largely the same semantics

 - add a new *per-cpu* hotplug lock that gets taken fairly early when
the CPU comes up (before calibration), and then we can drop the global
lock. We just need to make sure that the CPU has been added to the
list of CPUs; we don't need the CPU to have fully initialized itself.

 - on cpu unplug, we first take the global lock, and then after that
we take the local lock of the CPU being brought down - so that a
"down" event cannot succeed before the previous "up" is complete
(rough sketch below).

Wouldn't something like that largely solve the problem? Sure, maybe
some of the current get_online_cpus() users would need to be taught to
wait for the percpu lock (or completion - maybe that would be better),
but most of them don't really care. They tend to just want to do
something fairly simple with a stable list of CPUs.
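
For illustration (made-up helpers again, reusing the cpu_up_done
completion from the sketch above) - the common reader only needs the
stable list, the rare one can wait for the CPU to be fully up:

int cpu;

/* most readers: a stable cpu list is all they need */
get_online_cpus();
for_each_online_cpu(cpu)
        count_something_on(cpu);            /* made-up helper */
put_online_cpus();

/* the rare reader that really needs the CPU fully initialized */
get_online_cpus();
for_each_online_cpu(cpu) {
        wait_for_completion(&per_cpu(cpu_up_done, cpu));
        poke_fully_initialized(cpu);        /* made-up helper */
}
put_online_cpus();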

I dunno. Maybe it would be more painful than I think it would.

                  Linus
