Date:	Thu, 19 Feb 2009 16:06:19 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Jens Axboe <jens.axboe@...cle.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Steven Rostedt <rostedt@...dmis.org>,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Subject: Re: Q: smp.c && barriers (Was: [PATCH 1/4] generic-smp: remove
	single ipi fallback for smp_call_function_many())


* Nick Piggin <npiggin@...e.de> wrote:

> On Thu, Feb 19, 2009 at 05:47:20PM +1100, Benjamin Herrenschmidt wrote:
> > 
> > > It might hide some architecture-specific implementation issue, of course, 
> > > so random amounts of "smp_mb()"s sprinkled around might well make some 
> > > architecture "work", but it's in no way guaranteed. A smp_mb() does not 
> > > guarantee that some separate IPI network is ordered - that may well take 
> > > some random machine-specific IO cycle.
> > > 
> > > That said, at least on x86, taking an interrupt should be a serializing 
> > > event, so there should be no reason for anything on the receiving side. 
> > > The _sending_ side might need to make sure that there is serialization 
> > > when generating the IPI (so that the IPI cannot happen while the writes 
> > > are still in some per-CPU write buffer and haven't become part of the 
> > > cache coherency domain).
> > > 
> > > And at least on x86 it's actually pretty hard to generate out-of-order 
> > > accesses to begin with (_regardless_ of any issues external to the CPU). 
> > > You have to work at it, and use a WC memory area, and I'm pretty sure we 
> > > use UC for the apic accesses.
> > 
> > On powerpc, I suspect an smp_mb() on the sender would be
> > useful... it mostly depends on how the IPI is generated, but
> > in most cases it's going to be an MMIO write, i.e. a
> > non-cached write, which isn't ordered vs. any previous cached
> > store except by using a full sync (which is what smp_mb()
> > does).
> 
> So your arch_send_call_function_single_ipi etc. need to ensure
> this, right?  Generic code obviously has no idea how to do it.
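
Yes. Just to illustrate (a sketch with a made-up doorbell
register, not the actual powerpc code), the ordering requirement
then lives entirely inside the arch hook:

	/* hypothetical per-cpu doorbell MMIO registers */
	extern void __iomem *ipi_doorbell[NR_CPUS];

	void arch_send_call_function_single_ipi(int cpu)
	{
		/*
		 * Order the cacheable store that queued the call data
		 * before the non-cacheable MMIO store that raises the
		 * IPI.  On powerpc smp_mb() expands to a full sync.
		 */
		smp_mb();
		writel(1, ipi_doorbell[cpu]);
	}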

The thing is, the most widespread way to send IPIs (the x86
non-x2apic local APIC) does not need any barriers in the generic
code or elsewhere, because the local APIC registers are mapped
uncached, so accesses to them are implicitly ordered against
earlier cacheable stores.
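
I.e. the store that actually triggers the IPI is just a plain
write to the UC-mapped APIC page, roughly along these lines
(simplified, not the exact apic code):

	static void __send_ipi_sketch(unsigned int apicid, int vector)
	{
		/*
		 * The APIC registers are mapped UC; on x86 a UC store
		 * is strongly ordered and cannot pass earlier cacheable
		 * stores still sitting in the write buffer, so no
		 * explicit barrier is needed before raising the IPI.
		 */
		apic_write(APIC_ICR2, SET_APIC_DEST_FIELD(apicid));
		apic_write(APIC_ICR, APIC_DM_FIXED | vector);
	}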

So the right solution is to add barriers to those IPI 
implementations that need them. This means that the generic code 
should not have a barrier for IPI sending.
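
So the generic path stays barrier-free, roughly like this
(a simplified sketch -- the real kernel/smp.c code also takes
the queue lock and manages the csd flags):

	static void queue_and_kick(int cpu, struct call_single_data *data,
				   struct list_head *dst)
	{
		/* cacheable store: publish the call data on the
		 * destination CPU's queue */
		list_add_tail(&data->list, dst);

		/*
		 * No smp_mb() here.  An architecture whose IPI-raising
		 * store is not already ordered against the list_add
		 * above puts the barrier inside this hook.
		 */
		arch_send_call_function_single_ipi(cpu);
	}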

	Ingo