Message-ID: <20140228122624.GF9987@twins.programming.kicks-ass.net>
Date:	Fri, 28 Feb 2014 13:26:24 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Tony Luck <tony.luck@...el.com>,
	Robert Richter <rric@...nel.org>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	Aaro Koskinen <aaro.koskinen@....fi>,
	David Daney <david.daney@...ium.com>,
	linux-kernel@...r.kernel.org
Subject: Re: smp_call_function_single with wait=0 considered harmful

On Wed, Dec 04, 2013 at 08:46:27AM -0800, Christoph Hellwig wrote:
> While doing my recent work on the generic smp function calls I noticed
> that smp_call_function_single without the wait flag can't work, as
> it allocates struct call_single_data on stack, and without the wait
> flag will happily return before the IPI has been executed.

It doesn't actually; in the !wait case it uses a per-cpu csd rather than the on-stack one.
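
For reference, roughly how the csd gets picked (a simplified sketch, names
approximate; error handling and the local-cpu fast path omitted, so not a
verbatim copy of kernel/smp.c):

    int smp_call_function_single(int cpu, smp_call_func_t func, void *info, int wait)
    {
            struct call_single_data d = { .flags = 0 };     /* on-stack csd */
            struct call_single_data *csd = &d;

            /*
             * In the !wait case, use a per-cpu csd instead, so nothing
             * references the on-stack object after we return.
             */
            if (!wait)
                    csd = this_cpu_ptr(&csd_data);

            csd_lock(csd);          /* wait for any previous user of this csd */

            csd->func = func;
            csd->info = info;
            /* ... queue csd on the target cpu and send the IPI ... */

            return 0;
    }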

The subsequent csd_lock() ensures the caller waits for any prior user of
that csd to complete, so only if you're doing multiple
smp_call_function_single() invocations back-to-back will they queue up.
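
Roughly what that locking looks like (again a simplified sketch, flag name
approximate): csd_lock() spins until the previous owner is done before
claiming the csd:

    static void csd_lock_wait(struct call_single_data *csd)
    {
            /*
             * Spin until the previous user is done; the IPI handler
             * clears the flag once func() has run on the target cpu.
             */
            while (csd->flags & CSD_FLAG_LOCK)
                    cpu_relax();
    }

    static void csd_lock(struct call_single_data *csd)
    {
            csd_lock_wait(csd);             /* serializes back-to-back users */
            csd->flags |= CSD_FLAG_LOCK;
            smp_mb();       /* order the flag store before the later csd field stores */
    }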

> This affects the following callers:

<snip>

>   kernel/stop_machine.c:stop_two_cpus()

That site should work with .wait=1 just fine, but given the above, the
.wait=0 case doesn't appear problematic at all.
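
Purely for illustration (func and arg are placeholders), the difference the
wait flag makes to the caller:

    /*
     * wait=0: returns as soon as the (per-cpu) csd is queued; func may
     * not have run on the target cpu yet.
     */
    smp_call_function_single(cpu, func, arg, 0);

    /* wait=1: returns only after func has completed on the target cpu. */
    smp_call_function_single(cpu, func, arg, 1);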
