Date:   Tue, 6 Sep 2016 13:17:53 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Will Deacon <will.deacon@....com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Oleg Nesterov <oleg@...hat.com>,
        Paul McKenney <paulmck@...ux.vnet.ibm.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        linux-kernel@...r.kernel.org, Nicholas Piggin <npiggin@...il.com>,
        Ingo Molnar <mingo@...nel.org>,
        Alan Stern <stern@...land.harvard.edu>
Subject: Re: Question on smp_mb__before_spinlock

On Mon, Sep 05, 2016 at 11:10:22AM +0100, Will Deacon wrote:

> > The second issue I wondered about is spinlock transitivity. All except
> > powerpc have RCsc locks, and since Power already does a full mb, would
> > it not make sense to put it _after_ the spin_lock(), which would provide
> > the same guarantee, but also upgrade the section to RCsc.
> > 
> > That would make all schedule() calls fully transitive against one
> > another.
> 
> It would also match the way in which the arm64 atomic_*_return ops
> are implemented, since full barrier semantics are required there.

Hmm, are you sure? The way I read arch/arm64/include/asm/atomic_ll_sc.h
is that you do ll/sc-rel + mb.
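(For reference, the shape I mean is roughly the below; a from-memory sketch,
not the exact macro-generated code in atomic_ll_sc.h:)

	static inline int atomic_add_return(int i, atomic_t *v)
	{
		unsigned long tmp;
		int result;

		asm volatile(
	"1:	ldxr	%w0, %2\n"	/* plain load-exclusive (no acquire) */
	"	add	%w0, %w0, %w3\n"
	"	stlxr	%w1, %w0, %2\n"	/* store-exclusive with release      */
	"	cbnz	%w1, 1b\n"
	"	dmb	ish"		/* trailing full barrier             */
		: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
		: "Ir" (i)
		: "memory");

		return result;
	}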

> > That is, would something like the below make sense?
> 
> Works for me, but I'll do a fix to smp_mb__before_spinlock anyway for
> the stable tree.

Indeed, thanks!

> 
> The only slight annoyance is that, on arm64 anyway, a store-release
> appearing in program order before the LOCK operation will be observed
> in order, so if the write of CONDITION=1 in the try_to_wake_up case
> used smp_store_release, we wouldn't need this barrier at all.

Right, but this is because your load-acquire and store-release are much
stronger than Linux's. Not only are they RCsc, they are also globally
ordered irrespective of the variable (iirc).

This wouldn't work for PPC (even if we could find all such prior
stores).
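
(For concreteness, the wake-up pattern under discussion is roughly the
following; CONDITION and the task pointer are illustrative only:)

	/* waker */
	CONDITION = 1;		/* a smp_store_release() here would suffice on
				 * arm64, being ordered before the LOCK inside
				 * ttwu; PPC would still need the full barrier */
	wake_up_process(tsk);

	/* waiter */
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (CONDITION)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);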

OK, I suppose I'll go stare at what we can do about the mm_types.h use and
spin a patch with a changelog.
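
(Roughly, the idea would be to move the barrier to after the acquisition;
the helper name below is illustrative only, not necessarily what the patch
will end up using:)

	/*
	 * Instead of:
	 *
	 *	smp_mb__before_spinlock();
	 *	raw_spin_lock(&lock);
	 *
	 * do:
	 */
	raw_spin_lock(&lock);
	smp_mb__after_spinlock();	/* full mb after the acquisition: same
					 * guarantee that prior STOREs cannot be
					 * reordered into the critical section,
					 * but it also upgrades the lock to RCsc,
					 * making schedule() calls transitive
					 * against one another                  */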
