Date:	Fri, 12 Aug 2016 10:59:46 +0800
From:	Boqun Feng <boqun.feng@...il.com>
To:	Davidlohr Bueso <dave@...olabs.net>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Manfred Spraul <manfred@...orfullife.com>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Michael Ellerman <mpe@...erman.id.au>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Susanne Spraul <1vier1@....de>, parri.andrea@...il.com
Subject: Re: spin_lock implicit/explicit memory barrier

On Thu, Aug 11, 2016 at 11:31:06AM -0700, Davidlohr Bueso wrote:
> On Thu, 11 Aug 2016, Peter Zijlstra wrote:
> 
> > On Wed, Aug 10, 2016 at 04:29:22PM -0700, Davidlohr Bueso wrote:
> > 
> > > (1) As Manfred suggested, have a patch 1 that fixes the race against mainline
> > > with the redundant smp_rmb, then apply a second patch that gets rid of it
> > > for mainline, but only backport the original patch 1 down to 3.12.
> > 
> > I have not followed the thread closely, but this seems like the best
> > option. Esp. since 726328d92a42 ("locking/spinlock, arch: Update and fix
> > spin_unlock_wait() implementations") is incomplete, it relies on at
> > least 6262db7c088b ("powerpc/spinlock: Fix spin_unlock_wait()") to sort
> > PPC.
> 
> Yeah, and we'd also need the arm bits; which reminds me, aren't alpha
> ldl_l/stl_c sequences also exposed to this delaying of the publishing
> when a non-owner peeks at the lock? Right now sysv sem's would be busted
> when doing either is_locked or unlock_wait, shouldn't these be pimped up
> to full smp_mb()s?
> 

You are talking about a problem similar to this one:

http://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1018307.html

right?

The crux of this problem is whether the barrier or atomic operation in
spin_lock() orders the STORE part of the lock acquisition against the
memory operations in the critical section.
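
To make that concrete, here is a minimal userspace C11 sketch of the
pattern (the thread names and the relaxed store standing in for the
STORE half of an ll/sc acquisition are mine; it illustrates the
ordering concern rather than reproducing it, since strongly ordered
hardware will hide it):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int lock;		/* stands in for the spinlock word */
static atomic_int data;		/* accessed inside the critical section */
static int r_owner, r_peeker;

static void *owner(void *arg)
{
	(void)arg;
	/* STORE part of the lock acquisition, acquire ordering only. */
	atomic_store_explicit(&lock, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_acquire);	/* no STORE->LOAD */
	/* LOAD inside the critical section. */
	r_owner = atomic_load_explicit(&data, memory_order_relaxed);
	return NULL;
}

static void *peeker(void *arg)
{
	(void)arg;
	atomic_store_explicit(&data, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* ~ smp_mb() */
	/* spin_is_locked()-style peek at the lock word. */
	r_peeker = atomic_load_explicit(&lock, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, owner, NULL);
	pthread_create(&b, NULL, peeker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/*
	 * r_owner == 0 && r_peeker == 0 is the bad outcome: the peeker
	 * saw the lock free although the owner was already reading
	 * inside the critical section.
	 */
	printf("r_owner=%d r_peeker=%d\n", r_owner, r_peeker);
	return 0;
}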

On PPC we use lwsync, which doesn't order STORE->LOAD, so there is a
problem. On ARM64, and with qspinlock on x86, the problem exists for
similar reasons.
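
In C11 fence terms (my rough mapping, not anything from the kernel
tree), lwsync gives you about what an acquire+release fence does,
which is every ordering except STORE->LOAD:

#include <stdatomic.h>

/*
 * Roughly lwsync: orders LOAD->LOAD, LOAD->STORE and STORE->STORE
 * across the fence, but not STORE->LOAD.
 */
static inline void lwsync_like(void)
{
	atomic_thread_fence(memory_order_acq_rel);
}

/*
 * Roughly sync (what smp_mb() uses on PPC): also orders STORE->LOAD,
 * which is exactly the ordering missing above.
 */
static inline void sync_like(void)
{
	atomic_thread_fence(memory_order_seq_cst);
}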

But if an arch implements its spin_lock() with a full barrier, then
even though the atomic is implemented with ll/sc, the STORE part of it
can't be reordered with memory operations in the critical section. I
think that may be the case for alpha (and also for ARM32).
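
A sketch of that last case, with a hypothetical test-and-set loop
standing in for the ll/sc sequence; the full barrier at the end is
what keeps the lock-word STORE from sinking past the critical
section's LOADs:

#include <stdatomic.h>

static atomic_int lock_word;

static void full_barrier_spin_lock(void)
{
	int expected = 0;

	/* ll/sc-style retry loop that flips the lock word 0 -> 1. */
	while (!atomic_compare_exchange_weak_explicit(&lock_word,
						      &expected, 1,
						      memory_order_relaxed,
						      memory_order_relaxed))
		expected = 0;
	/*
	 * Full barrier (~ smp_mb()): unlike the acquire-only lwsync
	 * case, this also orders the STORE above against later LOADs,
	 * so a non-owner doing spin_is_locked()/spin_unlock_wait()
	 * can't see the lock free once the owner has started reading
	 * inside the critical section.
	 */
	atomic_thread_fence(memory_order_seq_cst);
}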

Regards,
Boqun

> Thanks,
> Davidlohr
> 
