Message-ID: <20170816041418.GB24294@blaptop>
Date: Wed, 16 Aug 2017 13:14:18 +0900
From: Minchan Kim <minchan@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Nadav Amit <namit@...are.com>, Ingo Molnar <mingo@...nel.org>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, "H. Peter Anvin" <hpa@...or.com>,
Linux-Next Mailing List <linux-next@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus <torvalds@...ux-foundation.org>
Subject: Re: linux-next: manual merge of the akpm-current tree with the tip
tree

On Mon, Aug 14, 2017 at 09:57:23PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 14, 2017 at 05:38:39PM +0900, Minchan Kim wrote:
> > memory-barriers.txt always scares me. I have read it for a while
> > and IIUC, it seems the semantics of spin_unlock(&same_pte) would be
> > enough without a memory barrier inside mm_tlb_flush_nested.
>
> Indeed, see the email I just sent. It's both spin_lock() and
> spin_unlock() that we care about.
>
> Aside from the semi-permeable barrier of these primitives, RCpc means
> these orderings only work against the _same_ lock variable.
>
> Let me try and explain the ordering for PPC (which is by far the worst
> we have in this regard):
>
>
> spin_lock(lock)
> {
> 	while (test_and_set(lock))	/* ACQUIRE is on this LOAD */
> 		cpu_relax();
> 	lwsync();
> }
>
>
> spin_unlock(lock)
> {
> 	lwsync();			/* RELEASE is on the store below */
> 	clear(lock);
> }
>
> Now LWSYNC has fairly 'simple' semantics, but with fairly horrible
> ramifications. Consider LWSYNC to provide _local_ TSO ordering; this
> means that it allows 'stores reordered after loads'.
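
To see what that one relaxation permits, here is a minimal
store-buffering sketch in the same pseudocode style; X, Y, r0 and r1
are made-up names, not anything from the kernel. Because LWSYNC does
not order a prior store against a later load, both CPUs can read 0:

	int X = 0, Y = 0;

	/* CPU0 */			/* CPU1 */
	WRITE_ONCE(X, 1);		WRITE_ONCE(Y, 1);
	lwsync();			lwsync();
	r0 = READ_ONCE(Y);		r1 = READ_ONCE(X);

	/* r0 == 0 && r1 == 0 is allowed: each CPU's store may be
	 * delayed past its own later load. The full SYNC forbids it. */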
>
> For the spin_lock() that implies that all loads/stores inside the lock
> do indeed stay in, but the ACQUIRE is only on the LOAD of the
> test_and_set(). That is, the actual _set_ can leak in; after all, it
> can reorder stores after loads (inside the lock).
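
Concretely, for a hypothetical variable A loaded under the lock, the
CPU may perform the accesses like this (a sketch, not real kernel
code):

	spin_lock(lock);
	r0 = READ_ONCE(A);

	/* may be performed as:
	 *	LOAD  lock	(test_and_set() finds it clear - ACQUIRE)
	 *	LOAD  A		(the critical-section load)
	 *	STORE lock = 1	(the set, reordered after the load of A)
	 * harmless for mutual exclusion, but visible to anyone who
	 * inspects the lock word itself. */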
>
> For unlock it again means all prior loads/stores stay prior, and the
> RELEASE is on the store clearing the lock state (nothing surprising
> here).
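
Again as a sketch, with a hypothetical store to A inside the critical
section:

	WRITE_ONCE(A, 1);
	spin_unlock(lock);

	/* performed as:
	 *	STORE A = 1
	 *	lwsync		(RELEASE)
	 *	STORE lock = 0
	 * a store->store pair, which LWSYNC does order, so the store to
	 * A cannot pass the clearing of the lock. */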
>
> Now for the _local_ part: the main take-away is that these orderings
> are strictly CPU-local. What makes the spinlock work across CPUs (as
> we'd very much expect it to) is the address dependency on the lock
> variable.
>
> In order for the spin_lock() to succeed, it must observe the clear.
> It's this link that crosses between the CPUs and builds the ordering.
> But only those two CPUs agree on this order. A third CPU not involved
> in this transaction can disagree on the order of events.
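
The textbook case of such disagreement is IRIW (independent reads of
independent writes); all names below are illustrative. With only
LWSYNC between the reads, PPC allows the two observing CPUs to
disagree on the order of the two stores:

	int X = 0, Y = 0;

	/* CPU0 */	/* CPU1 */	/* CPU2 */	/* CPU3 */
	X = 1;		Y = 1;		r0 = X; /*1*/	r2 = Y; /*1*/
					lwsync();	lwsync();
					r1 = Y; /*0*/	r3 = X; /*0*/

	/* Allowed: CPU2 sees X=1 before Y=1, while CPU3 sees Y=1 before
	 * X=1. Replacing LWSYNC with the full SYNC forbids this. */
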
The detailed explanation in your previous reply made me comfortable
with the scary memory-barriers.txt, but this reply makes me scared
again. ;-)

Thanks for the kind clarification, Peter!