Message-ID: <20160526135406.GK3192@twins.programming.kicks-ass.net>
Date:	Thu, 26 May 2016 15:54:06 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Manfred Spraul <manfred@...orfullife.com>
Cc:	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	dave@...olabs.net, paulmck@...ux.vnet.ibm.com, will.deacon@....com,
	boqun.feng@...il.com, Waiman.Long@....com, tj@...nel.org,
	pablo@...filter.org, kaber@...sh.net, davem@...emloft.net,
	oleg@...hat.com, netfilter-devel@...r.kernel.org,
	sasha.levin@...cle.com, hofrat@...dl.org
Subject: Re: [RFC][PATCH 3/3] locking,netfilter: Fix nf_conntrack_lock()

On Tue, May 24, 2016 at 10:41:43PM +0200, Manfred Spraul wrote:
> >--- a/net/netfilter/nf_conntrack_core.c
> >+++ b/net/netfilter/nf_conntrack_core.c
> >@@ -74,7 +74,18 @@ void nf_conntrack_lock(spinlock_t *lock)
> >  	spin_lock(lock);
> >  	while (unlikely(nf_conntrack_locks_all)) {
> >  		spin_unlock(lock);
> >+		/*
> >+		 * Order the nf_conntrack_locks_all load vs the spin_unlock_wait()
> >+		 * loads below, to ensure locks_all is indeed held.
> >+		 */
> >+		smp_rmb(); /* spin_lock(locks_all) */
> >  		spin_unlock_wait(&nf_conntrack_locks_all_lock);

> I don't understand the comment, and I don't understand why the smp_rmb() is
> required: What do you want to protect against?

The smp_rmb() orders against the spin_unlock_wait() load of
nf_conntrack_locks_all_lock being done _before_ the
nf_conntrack_locks_all load.

This is possible because the spin_unlock() in between is a RELEASE: it
stops the nf_conntrack_locks_all load from falling down past it, but it
doesn't stop the &nf_conntrack_locks_all_lock load from being done
early.
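
In code, with the two loads labelled (a sketch of the patched loop; the
[L1]/[L2] labels are mine, and the re-lock at the bottom of the loop is
assumed from the upstream nf_conntrack_lock(), it isn't visible in the
hunk above):

	void nf_conntrack_lock(spinlock_t *lock)
	{
		spin_lock(lock);
		while (unlikely(nf_conntrack_locks_all)) {	/* [L1] */
			/* RELEASE: [L1] cannot fall below this ... */
			spin_unlock(lock);
			/* ... but only the smp_rmb() keeps [L2] from
			 * being satisfied before [L1]. */
			smp_rmb(); /* spin_lock(locks_all) */
			spin_unlock_wait(&nf_conntrack_locks_all_lock); /* [L2] */
			spin_lock(lock);
		}
	}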

Inverting these two loads could result in not waiting when we should.


	nf_conntrack_all_lock()		nf_conntrack_lock()

					  [L] all_lock == unlocked
	  [S] spin_lock(&all_lock);
	  [S] all = true;
					  [L] all == true

And we'll not wait for the current all_lock section to finish,
resulting in an all_lock section and a regular lock section running at
the same time.
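
For reference, the writer side this races against looks roughly like
this (a sketch of nf_conntrack_all_lock() as it stood at the time; the
CONNTRACK_LOCKS loop over the per-bucket locks is assumed from the
upstream code, not part of this patch):

	static void nf_conntrack_all_lock(void)
	{
		int i;

		/* [S] take the global lock ... */
		spin_lock(&nf_conntrack_locks_all_lock);
		/* [S] ... and flag it, so new nf_conntrack_lock()
		 * callers spin in the loop above */
		nf_conntrack_locks_all = true;

		/* wait for every in-flight per-bucket critical section */
		for (i = 0; i < CONNTRACK_LOCKS; i++)
			spin_unlock_wait(&nf_conntrack_locks[i]);
	}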
