Date:	Fri, 12 Aug 2016 20:43:55 +0200
From:	Manfred Spraul <manfred@...orfullife.com>
To:	Boqun Feng <boqun.feng@...il.com>,
	Davidlohr Bueso <dave@...olabs.net>
Cc:	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Michael Ellerman <mpe@...erman.id.au>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Susanne Spraul <1vier1@....de>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: spin_lock implicit/explicit memory barrier

Hi Boqun,

On 08/12/2016 04:47 AM, Boqun Feng wrote:
>> We should not be doing an smp_mb() right after a spin_lock(); it makes
>> no sense. The spinlock machinery should guarantee us the barriers in
>> the unorthodox locking cases, such as this.
>>
Do we really want to go there?
Trying to handle all unorthodox cases will end up as an endless list of
patches, and some architectures are guaranteed to be left stale.

> Right.
>
> If you have:
>
> 6262db7c088b ("powerpc/spinlock: Fix spin_unlock_wait()")
>
> you don't need smp_mb() after spin_lock() on PPC.
>
> And, IIUC, if you have:
>
> 3a5facd09da8 ("arm64: spinlock: fix spin_unlock_wait for LSE atomics")
> d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
> concurrent lockers")
>
> you don't need smp_mb() after spin_lock() on ARM64.
>
> And, IIUC, if you have:
>
> 2c6100227116 ("locking/qspinlock: Fix spin_unlock_wait() some more")
>
> you don't need smp_mb() after spin_lock() on x86 with qspinlock.

I would really prefer the other approach:
- spin_lock() is an acquire, that's it. No further guarantees, e.g. no
ordering of the store that writes the lock.
- spin_unlock() is a release, that's it.
- generic smp_mb__after_before_whatever() helpers, which architectures
can override.
E.g. if qspinlocks on x86 already get the smp_mb__after_spin_lock()
ordering for free, then the helper can be a nop.
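A minimal sketch of what I have in mind (the #ifndef fallback wiring is
my assumption; only the smp_mb__after_spin_lock() name comes from the
list above):

	/* generic header: conservative default, a full barrier */
	#ifndef smp_mb__after_spin_lock
	#define smp_mb__after_spin_lock()	smp_mb()
	#endif

	/* arch override, e.g. x86 qspinlock, where spin_lock()
	 * already implies full ordering: compiler barrier only */
	#define smp_mb__after_spin_lock()	barrier()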

Right now, we are starting to hardcode guarantees into the architectures -
for some callers.
Other callers use solutions such as smp_mb__after_unlock_lock(), i.e.
arch-dependent workarounds in arch-independent code.
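For comparison, that existing helper is used like this at the call site
(the locks here are made up; the pattern is how e.g. RCU uses it):

	spin_unlock(&a->lock);
	spin_lock(&b->lock);
	/* upgrade the UNLOCK+LOCK pair to a full memory barrier */
	smp_mb__after_unlock_lock();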

And: We unnecessarily add overhead.
Both ipc/sem and netfilter do loops over many spinlocks:
>	for (i = 0; i < CONNTRACK_LOCKS; i++) {
>		spin_unlock_wait(&nf_conntrack_locks[i]);
>	}
One memory barrier would be sufficient, but because the barrier is
embedded in each spin_unlock_wait() call, we end up with CONNTRACK_LOCKS
barriers.
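With explicit helpers, the barrier could be hoisted out of the loop (a
sketch; smp_mb__after_spin_unlock_wait() is a hypothetical name
following the scheme above):

	for (i = 0; i < CONNTRACK_LOCKS; i++)
		spin_unlock_wait(&nf_conntrack_locks[i]);
	/* one barrier for the whole loop, not one per lock */
	smp_mb__after_spin_unlock_wait();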

Should I create a patch?
(i.e. documentation and generic helpers)

--
     Manfred
