Message-ID: <20151016190648.GC3816@twins.programming.kicks-ass.net>
Date: Fri, 16 Oct 2015 21:06:48 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Catalin Marinas <catalin.marinas@....com>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Will Deacon <will.deacon@....com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Oleg Nesterov <oleg@...hat.com>, Ingo Molnar <mingo@...nel.org>
Subject: Re: Q: schedule() and implied barriers on arm64
On Fri, Oct 16, 2015 at 05:55:35PM +0100, Catalin Marinas wrote:
> arm64 indeed does not have a dmb after spin_lock, it only has a
> load-acquire. So with the default smp_mb__before_spinlock() +
> spin_lock we have:
>
> smp_wmb()
> loop
> load-acquire
> store
>
> So (I think) this guarantees that any writes before wmb+lock would be
> visible before any reads _and_ writes after wmb+lock. However, the
> ordering with reads before wmb+lock is not guaranteed.
That is my understanding as well; stores could creep up from below the
unlock, and then the reads from before the wmb+lock and those stores can
cross and you've lost.
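For illustration, a minimal kernel-style C sketch of the above, assuming
the expansions described in the quote (smp_mb__before_spinlock() being
just smp_wmb(), and spin_lock() being a load-acquire with no dmb); the
lock and the variables a, b, X and Y are hypothetical:

#include <linux/spinlock.h>
#include <linux/compiler.h>

/* Hypothetical lock and variables, for illustration only. */
static DEFINE_SPINLOCK(lock);
static int a, b, X, Y;

static void sketch(void)
{
	int r0, r1;

	WRITE_ONCE(a, 1);		/* write before wmb+lock            */
	r0 = READ_ONCE(X);		/* read before wmb+lock             */

	smp_mb__before_spinlock();	/* default: smp_wmb()               */
	spin_lock(&lock);		/* arm64: load-acquire, no dmb      */

	r1 = READ_ONCE(b);		/* read inside the critical section */

	spin_unlock(&lock);		/* store-release: one-way barrier   */

	WRITE_ONCE(Y, 1);		/* store below the unlock           */

	/*
	 * Per the quoted analysis, the write to 'a' is ordered before the
	 * accesses after wmb+lock.  The earlier read of 'X', however, is
	 * ordered by neither smp_wmb() (stores only) nor the load-acquire
	 * (one-way), so it can slip below the lock; likewise the store to
	 * 'Y' can creep up above the store-release unlock.  Once both are
	 * inside the critical section nothing orders them, so the read of
	 * 'X' and the store to 'Y' can cross and the whole sequence does
	 * not act as a full memory barrier.
	 */
	(void)r0;
	(void)r1;
}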
In any case, it's all moot now, since Paul no longer requires schedule()
to imply a full barrier.