Message-ID: <20160907132354.GR10138@twins.programming.kicks-ass.net>
Date:   Wed, 7 Sep 2016 15:23:54 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Nicholas Piggin <npiggin@...il.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Will Deacon <will.deacon@....com>,
        Oleg Nesterov <oleg@...hat.com>,
        Paul McKenney <paulmck@...ux.vnet.ibm.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
        Alan Stern <stern@...land.harvard.edu>
Subject: Re: Question on smp_mb__before_spinlock

On Wed, Sep 07, 2016 at 10:17:26PM +1000, Nicholas Piggin wrote:
> >  /*
> > + * This barrier must provide two things:
> > + *
> > + *   - it must guarantee a STORE before the spin_lock() is ordered against a
> > + *     LOAD after it, see the comments at its two usage sites.
> > + *
> > + *   - it must ensure the critical section is RCsc.
> > + *
> > + * The latter is important for cases where we observe values written by other
> > + * CPUs in spin-loops, without barriers, while being subject to scheduling.
> > + *
> > + * CPU0			CPU1			CPU2
> > + * 
> > + * 			for (;;) {
> > + * 			  if (READ_ONCE(X))
> > + * 			  	break;
> > + * 			}
> > + * X=1
> > + * 			<sched-out>
> > + * 						<sched-in>
> > + * 						r = X;
> > + *
> > + * Without transitivity it could be that CPU1 observes X!=0 and breaks the
> > + * loop, we get migrated, and CPU2 still sees X==0.
> > + *
> > + * Since most load-store architectures implement ACQUIRE with an smp_mb() after
> > + * the LL/SC loop, they need no further barriers. Similarly, all our TSO
> > + * architectures imply an smp_mb() for each atomic instruction and equally don't
> > + * need more.
> > + *
> > + * Architectures that can implement ACQUIRE better need to take care.
> >   */
> > +#ifndef smp_mb__after_spinlock
> > +#define smp_mb__after_spinlock()	do { } while (0)
> >  #endif
> 
> It seems okay, but why not make it a special sched-only function name
> to prevent it being used in generic code?
> 
> I would not mind seeing responsibility for the switch barrier moved to
> generic context switch code like this (the alternative for powerpc, to reduce
> the number of hwsync instructions, was to add documentation and warnings about
> the barriers in arch-dependent and arch-independent code). And pairing it with
> a spinlock is reasonable.
> 
> It may not strictly be an "smp_" style of barrier if MMIO accesses are to
> be ordered here too, even though the critical section may only be providing
> acquire/release for cacheable memory, so maybe it's slightly more
> complicated than just cacheable RCsc?

Interesting idea..

So I'm not a fan of that raw_spin_lock wrapper, since that would end up
with a lot more boiler-plate code than just the one extra barrier.
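
For illustration, a minimal sketch of what that "one extra barrier" usage
would look like (the rq->lock call site is only illustrative):

	/* somewhere in the schedule() path */
	raw_spin_lock(&rq->lock);
	smp_mb__after_spinlock();	/* upgrade the lock's ACQUIRE to a
					 * full barrier, making the critical
					 * section RCsc per the comment above */

	/* ... critical section ... */

	raw_spin_unlock(&rq->lock);

whereas a dedicated sched-only lock wrapper would presumably also need
irqsave/trylock variants and the like.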

But moving MMIO/DMA/TLB etc.. barriers into this spinlock might not be a
good idea, since those are typically fairly heavy barriers, and it's
quite common to call schedule() without ending up in switch_to().

For PowerPC it works out, since there's only SYNC, no other option
afaik.

But ARM/ARM64 will have to do DSB(ISH) instead of DMB(ISH). IA64 would
need to issue "sync.i" and mips-octeon "synciobdma".
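
If the barrier stays cacheable-only, the arch side remains a plain full
barrier; a rough sketch of the split (the override below is illustrative,
not a concrete arch patch):

	/* generic default, as in the hunk above: the lock's ACQUIRE is
	 * already strong enough (TSO, or smp_mb() after the LL/SC loop) */
	#ifndef smp_mb__after_spinlock
	#define smp_mb__after_spinlock()	do { } while (0)
	#endif

	/* an arch whose ACQUIRE is weaker than a full barrier would
	 * override it with its regular full barrier, e.g.: */
	#define smp_mb__after_spinlock()	smp_mb()

Folding MMIO/DMA ordering in is what pushes that override towards the
heavier DSB / sync.i style instructions instead.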

Will, any idea of the extra cost involved in DSB vs DMB?
