Message-ID: <20160905101021.GC2649@arm.com>
Date:   Mon, 5 Sep 2016 11:10:22 +0100
From:   Will Deacon <will.deacon@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Oleg Nesterov <oleg@...hat.com>,
        Paul McKenney <paulmck@...ux.vnet.ibm.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        linux-kernel@...r.kernel.org, Nicholas Piggin <npiggin@...il.com>,
        Ingo Molnar <mingo@...nel.org>,
        Alan Stern <stern@...land.harvard.edu>
Subject: Re: Question on smp_mb__before_spinlock

On Mon, Sep 05, 2016 at 11:37:53AM +0200, Peter Zijlstra wrote:
> So recently I've had two separate issues that touched upon
> smp_mb__before_spinlock().
> 
> 
> Since its inception, our understanding of ACQUIRE, esp. as applied to
> spinlocks, has changed somewhat. I also wonder whether, with a simple
> change, we can make it provide more.
> 
> The problem with the comment is that the STORE done by spin_lock()
> isn't itself ordered by the ACQUIRE, and therefore a later LOAD can
> pass over it and cross with any prior STORE, rendering the default
> smp_wmb() insufficient (as pointed out by Alan).
> 
> Now, this is only really a problem on PowerPC and ARM64; the former
> already defines smp_mb__before_spinlock() as smp_mb(), but the latter
> does not. Will?

I just replied to that thread and, assuming I've grokked the
sched/core.c usage correctly, it does look like we need to make that an
smp_mb() with the current code.
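
To spell out the case I have in mind (a hand-written sketch of the
try_to_wake_up() path, not the exact sched/core.c code; CONDITION
stands in for whatever flag the sleeper checks):

    CPU0 (waker)                    CPU1 (sleeper)

    CONDITION = 1;                  p->state = TASK_UNINTERRUPTIBLE;
    smp_mb__before_spinlock();      smp_mb();
    raw_spin_lock(&p->pi_lock);     if (!CONDITION)
    if (!(p->state & state))                schedule();
            /* skip the wakeup */

With only the default smp_wmb(), CPU0's LOAD of p->state can be
satisfied before its STORE to CONDITION is visible, so both CPUs can
read the stale values and the wakeup is lost.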

> The second issue I wondered about is spinlock transitivity. All except
> powerpc have RCsc locks, and since Power already does a full mb, would
> it not make sense to put it _after_ the spin_lock(), which would
> provide the same guarantee but also upgrade the section to RCsc?
> 
> That would make all schedule() calls fully transitive against one
> another.

It would also match the way in which the arm64 atomic_*_return ops
are implemented, since full barrier semantics are required there.
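
For concreteness, my reading of the transformation, as a sketch (the
__after name below is one I'm making up here; your patch may well
spell it differently):

    /* today */
    smp_mb__before_spinlock();      /* smp_wmb() by default */
    raw_spin_lock(&p->pi_lock);

    /* proposed */
    raw_spin_lock(&p->pi_lock);
    smp_mb__after_spinlock();       /* smp_mb() where needed */

The full barrier after the ACQUIRE still orders prior STOREs against
later LOADs, and additionally upgrades the critical section to RCsc.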

> That is, would something like the below make sense?

Works for me, but I'll do a fix to smp_mb__before_spinlock() anyway for
the stable tree.
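
For arm64 that fix amounts to something like (sketch; the exact file
and comment will be per the actual patch):

    /* Order prior STOREs against LOADs/STOREs inside the section. */
    #define smp_mb__before_spinlock()       smp_mb()

i.e. strengthening the default smp_wmb() to a full barrier, as Power
already does.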

The only slight annoyance is that, on arm64 anyway, a store-release
appearing in program order before the LOCK operation will be observed
in order. So if the write of CONDITION=1 in the try_to_wake_up() case
used smp_store_release(), we wouldn't need this barrier at all.
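
That is (sketch, with CONDITION again standing in for the real flag):

    smp_store_release(&CONDITION, 1);       /* STLR on arm64 */
    raw_spin_lock(&p->pi_lock);             /* acquire: LDAXR/STXR */

The release store is guaranteed to be observed before the lock's
ACQUIRE, so LOADs inside the critical section cannot see a stale
CONDITION and the extra barrier becomes redundant (on arm64, anyway).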

Will
