Message-ID: <20160912122708.71a91ea3@roar.ozlabs.ibm.com>
Date: Mon, 12 Sep 2016 12:27:08 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Will Deacon <will.deacon@....com>,
Oleg Nesterov <oleg@...hat.com>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Michael Ellerman <mpe@...erman.id.au>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
Alan Stern <stern@...land.harvard.edu>
Subject: Re: Question on smp_mb__before_spinlock
On Wed, 7 Sep 2016 15:23:54 +0200
Peter Zijlstra <peterz@...radead.org> wrote:
> On Wed, Sep 07, 2016 at 10:17:26PM +1000, Nicholas Piggin wrote:
> > > /*
> > > + * This barrier must provide two things:
> > > + *
> > > + * - it must guarantee a STORE before the spin_lock() is ordered against a
> > > + * LOAD after it, see the comments at its two usage sites.
> > > + *
> > > + * - it must ensure the critical section is RCsc.
> > > + *
> > > + * The latter is important for cases where we observe values written by other
> > > + * CPUs in spin-loops, without barriers, while being subject to scheduling.
> > > + *
> > > + * CPU0 CPU1 CPU2
> > > + *
> > > + * for (;;) {
> > > + * if (READ_ONCE(X))
> > > + * break;
> > > + * }
> > > + * X=1
> > > + * <sched-out>
> > > + * <sched-in>
> > > + * r = X;
> > > + *
> > > + * without transitivity it could be that CPU1 observes X!=0, breaks the
> > > + * loop, we get migrated to CPU2, and CPU2 still sees X==0.
> > > + *
> > > + * Since most load-store architectures implement ACQUIRE with an smp_mb() after
> > > + * the LL/SC loop, they need no further barriers. Similarly, all our TSO
> > > + * architectures imply an smp_mb() for each atomic instruction and equally don't
> > > + * need more.
> > > + *
> > > + * Architectures that can implement ACQUIRE better need to take care.
> > > */
> > > +#ifndef smp_mb__after_spinlock
> > > +#define smp_mb__after_spinlock() do { } while (0)
> > > #endif
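
[ To make sure we're talking about the same thing, I read the usage
sites as looking something like this (a sketch, not the actual patch):

	raw_spin_lock(&p->pi_lock);
	/*
	 * Full barrier: order the STORE before the lock against LOADs
	 * after it, and make the critical section RCsc.
	 */
	smp_mb__after_spinlock();

where the generic no-op definition suffices on TSO and on LL/SC archs
whose ACQUIRE already ends in smp_mb(). ]
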
> >
> > It seems okay, but why not make it a special sched-only function name
> > to prevent it from being used in generic code?
> >
> > I would not mind seeing responsibility for the switch barrier moved to
> > generic context switch code like this (the alternative for powerpc to
> > reduce the number of hwsync instructions was to add documentation and
> > warnings about the barriers in arch dependent and independent code).
> > And pairing it with a spinlock is reasonable.
> >
> > It may not strictly be an "smp_" style of barrier if MMIO accesses are to
> > be ordered here too, given that the critical section may only provide
> > acquire/release for cacheable memory, so maybe it's slightly more
> > complicated than just cacheable RCsc?
>
> Interesting idea..
>
> So I'm not a fan of that raw_spin_lock wrapper, since that would end up
> with a lot more boiler-plate code than just the one extra barrier.
/* generic fallback: the plain lock, no extra barrier */
#ifndef sched_ctxsw_raw_spin_lock
#define sched_ctxsw_raw_spin_lock(lock)	raw_spin_lock(lock)
#endif

with an arch that needs the full barrier (e.g. powerpc) overriding it:

#define sched_ctxsw_raw_spin_lock(lock) \
	do { smp_mb(); raw_spin_lock(lock); } while (0)

?
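
The generic scheduler would then take the rq lock through the wrapper,
something like (just a sketch, I haven't checked exactly where it would
slot in):

	/* kernel/sched/core.c: __schedule() */
	sched_ctxsw_raw_spin_lock(&rq->lock);
	...
	context_switch(rq, prev, next);

so only the architectures that want the barrier pay for the smp_mb(),
and the pairing with the lock is explicit in one place.
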
> But moving MMIO/DMA/TLB etc.. barriers into this spinlock might not be a
> good idea, since those are typically fairly heavy barriers, and its
> quite common to call schedule() without ending up in switch_to().
That's true, I guess. But if we already have an arch-specific smp_mb__
hook just for this context switch code, and we're asking architectures
to implement a *cacheable* memory barrier vs migration, then I see no
reason not to allow them to order uncacheable accesses there as well.
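
Concretely (a sketch of what an arch *could* do, not something the
patch requires), ordering MMIO as well would mean using the mandatory
barrier rather than the smp_ variant:

	/* order MMIO and cacheable accesses against migration */
	#define sched_ctxsw_raw_spin_lock(lock) \
		do { mb(); raw_spin_lock(lock); } while (0)

which is part of why the "smp_" naming doesn't quite fit.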
You make a good point about schedule() without switch_to(), but
architectures will still have no less flexibility than they do now.
Thanks,
Nick