Message-Id: <1498780894-8253-1-git-send-email-paulmck@linux.vnet.ibm.com>
Date: Thu, 29 Jun 2017 17:01:09 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
oleg@...hat.com, akpm@...ux-foundation.org, mingo@...hat.com,
dave@...olabs.net, manfred@...orfullife.com, tj@...nel.org,
arnd@...db.de, linux-arch@...r.kernel.org, will.deacon@....com,
peterz@...radead.org, stern@...land.harvard.edu,
parri.andrea@...il.com, torvalds@...ux-foundation.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Pablo Neira Ayuso <pablo@...filter.org>,
Jozsef Kadlecsik <kadlec@...ckhole.kfki.hu>,
Florian Westphal <fw@...len.de>,
"David S. Miller" <davem@...emloft.net>, <coreteam@...filter.org>
Subject: [PATCH RFC 01/26] netfilter: Replace spin_unlock_wait() with lock/unlock pair

There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair. This commit therefore replaces the spin_unlock_wait() calls
in nf_conntrack_lock() and nf_conntrack_all_lock() with spin_lock()
followed immediately by spin_unlock(). These functions do not appear
to be invoked on any fastpaths.
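
In other words, each wait of the form

	spin_unlock_wait(&lock);

becomes an acquisition and immediate release

	spin_lock(&lock);	/* Wait for any current holder to release. */
	spin_unlock(&lock);	/* Empty critical section. */

the idea being that actually acquiring the lock is at least as strong
as the old barrier-plus-spin_unlock_wait() sequences, so the
accompanying smp_rmb() and smp_mb() calls can go away as well.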
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Pablo Neira Ayuso <pablo@...filter.org>
Cc: Jozsef Kadlecsik <kadlec@...ckhole.kfki.hu>
Cc: Florian Westphal <fw@...len.de>
Cc: "David S. Miller" <davem@...emloft.net>
Cc: <netfilter-devel@...r.kernel.org>
Cc: <coreteam@...filter.org>
Cc: <netdev@...r.kernel.org>
Cc: Will Deacon <will.deacon@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Alan Stern <stern@...land.harvard.edu>
Cc: Andrea Parri <parri.andrea@...il.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
---
net/netfilter/nf_conntrack_core.c | 26 ++++++++------------------
1 file changed, 8 insertions(+), 18 deletions(-)
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index e847dbaa0c6b..9f997859d160 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -99,15 +99,11 @@ void nf_conntrack_lock(spinlock_t *lock) __acquires(lock)
 	spin_lock(lock);
 	while (unlikely(nf_conntrack_locks_all)) {
 		spin_unlock(lock);
-
-		/*
-		 * Order the 'nf_conntrack_locks_all' load vs. the
-		 * spin_unlock_wait() loads below, to ensure
-		 * that 'nf_conntrack_locks_all_lock' is indeed held:
-		 */
-		smp_rmb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
-		spin_unlock_wait(&nf_conntrack_locks_all_lock);
+		/* Wait for nf_conntrack_locks_all_lock holder to release ... */
+		spin_lock(&nf_conntrack_locks_all_lock);
+		spin_unlock(&nf_conntrack_locks_all_lock);
 		spin_lock(lock);
+		/* ... and retry. */
 	}
 }
 EXPORT_SYMBOL_GPL(nf_conntrack_lock);
@@ -150,17 +146,11 @@ static void nf_conntrack_all_lock(void)
 
 	spin_lock(&nf_conntrack_locks_all_lock);
 	nf_conntrack_locks_all = true;
-
-	/*
-	 * Order the above store of 'nf_conntrack_locks_all' against
-	 * the spin_unlock_wait() loads below, such that if
-	 * nf_conntrack_lock() observes 'nf_conntrack_locks_all'
-	 * we must observe nf_conntrack_locks[] held:
-	 */
-	smp_mb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
-
 	for (i = 0; i < CONNTRACK_LOCKS; i++) {
-		spin_unlock_wait(&nf_conntrack_locks[i]);
+		/* Wait for any current holder to release lock. */
+		spin_lock(&nf_conntrack_locks[i]);
+		spin_unlock(&nf_conntrack_locks[i]);
+		/* Next acquisition will see nf_conntrack_locks_all == true. */
 	}
 }
 
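For anyone wanting to experiment with the idiom outside the kernel,
here is a rough userspace analogue (illustration only, not part of
this patch: POSIX mutexes stand in for kernel spinlocks, and the
names are made up):

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_mutex_t all_lock = PTHREAD_MUTEX_INITIALIZER;
	static bool locks_all;	/* stand-in for nf_conntrack_locks_all */

	/*
	 * Stand-in for the replaced spin_unlock_wait(): block until any
	 * current holder of "m" releases it, without keeping it held.
	 */
	static void wait_for_holder(pthread_mutex_t *m)
	{
		pthread_mutex_lock(m);		/* waits out any holder */
		pthread_mutex_unlock(m);	/* empty critical section */
	}

	int main(void)
	{
		wait_for_holder(&all_lock);	/* no holder: returns at once */
		printf("all_lock released, locks_all=%d\n", locks_all);
		return 0;
	}

Build with "cc -pthread"; the point is simply that the lock/unlock
pair makes wait_for_holder() return only after any concurrent holder
has left its critical section.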
--
2.5.2