Message-Id: <1472742257-10515-1-git-send-email-manfred@colorfullife.com>
Date: Thu, 1 Sep 2016 17:04:10 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: benh@...nel.crashing.org, paulmck@...ux.vnet.ibm.com,
Ingo Molnar <mingo@...e.hu>, Boqun Feng <boqun.feng@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>, will.deacon@....com,
1vier1@....de, Davidlohr Bueso <dave@...olabs.net>,
Manfred Spraul <manfred@...orfullife.com>
Subject: [PATCH 0/7 V6] Clarify/standardize memory barriers for lock/unlock
Hi,
Based on the new consensus:
- spin_unlock_wait() is equivalent to spin_lock(); spin_unlock().
- spin_is_locked() provides no ordering guarantees.
- the acquire during spin_lock() applies to the load, not to the store.
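As a rough sketch of what the first rule means in practice
(example_unlock_wait() is an invented name, not something from this
series): a caller of spin_unlock_wait() may rely on the same ordering
it would get from an empty critical section.

	#include <linux/spinlock.h>

	static inline void example_unlock_wait(spinlock_t *lock)
	{
		/* Ordering-wise the same as spin_unlock_wait(lock): */
		spin_lock(lock);
		spin_unlock(lock);
	}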
Summary:
If a high-scalability locking scheme is built from multiple
spinlocks, additional memory barriers are often required.
The documentation was not as clear as it could be, and some memory
barriers in the existing implementations were missing or superfluous.
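As an illustration of the kind of scheme meant here (all names below
are invented; the real users are ipc/sem.c and nf_conntrack_core.c):

	#include <linux/spinlock.h>

	/*
	 * One global lock for whole-object operations, one lock per
	 * bucket for the scalable fast path.
	 */
	struct scaled_object {
		spinlock_t	global_lock;
		spinlock_t	bucket_lock[16];
	};

	static void lock_whole_object(struct scaled_object *obj)
	{
		int i;

		spin_lock(&obj->global_lock);
		/*
		 * Wait until every bucket operation that started before
		 * we took global_lock has dropped its bucket lock. With
		 * the rules above, spin_unlock_wait() already orders us
		 * after those critical sections, so no smp_rmb()/smp_mb()
		 * is needed behind it (that is what patches 1 and 6
		 * remove).
		 */
		for (i = 0; i < 16; i++)
			spin_unlock_wait(&obj->bucket_lock[i]);
		/*
		 * The fast path (not shown) takes bucket_lock[i] and
		 * then checks whether a whole-object operation is in
		 * progress; see ipc/sem.c for the complete handshake.
		 */
	}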
Patch 1: ipc/sem.c: Remove the smp_rmb() after spin_unlock_wait().
Patch 2: Documentation update.
Patch 3: Update ipc/sem.c based on the rules above.
Patch 4: Move smp_mb__after_unlock_lock() to <linux/spinlock.h>
         (a short usage sketch follows this list).
Patch 5: Fix the memory ordering for nf_conntrack.
Patch 6: nf_conntrack: Remove the smp_rmb() after spin_unlock_wait().
Patch 7: nf_conntrack: Remove the smp_mb() after spin_lock().
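For reference, a sketch of what smp_mb__after_unlock_lock() is for
once patch 4 makes it generally available (x, y, a and b are invented
for the example):

	#include <linux/spinlock.h>

	static int x, y;
	static DEFINE_SPINLOCK(a);
	static DEFINE_SPINLOCK(b);

	static int example(void)
	{
		int r;

		spin_lock(&a);
		WRITE_ONCE(x, 1);
		spin_unlock(&a);

		spin_lock(&b);
		/*
		 * The UNLOCK of 'a' followed by the LOCK of 'b' is not a
		 * full memory barrier on its own; with this, the store to
		 * x above cannot be reordered past the load of y below.
		 */
		smp_mb__after_unlock_lock();
		r = READ_ONCE(y);
		spin_unlock(&b);

		return r;
	}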
Patch 5 is larger than strictly required: it rewrites the conntrack
locking logic using the approach from ipc/sem.c. I think the new code
is simpler and more realtime-friendly.
@netfilter team: Through which tree should the patches be sent?
Usually I ask Andrew to merge my patches, as there is no
maintainer tree for ipc.
@Andrew: The patches are relative to mmots.
Could you include them in your tree, with the goal of getting them
into linux-next?
--
Manfred