Message-ID: <20160524144228.GA15189@worktop.bitpit.net>
Date: Tue, 24 May 2016 16:42:28 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	manfred@...orfullife.com, dave@...olabs.net,
	paulmck@...ux.vnet.ibm.com, will.deacon@....com
Cc: boqun.feng@...il.com, Waiman.Long@....com, tj@...nel.org,
	pablo@...filter.org, kaber@...sh.net, davem@...emloft.net,
	oleg@...hat.com, netfilter-devel@...r.kernel.org,
	sasha.levin@...cle.com, hofrat@...dl.org
Subject: Re: [RFC][PATCH 3/3] locking,netfilter: Fix nf_conntrack_lock()

On Tue, May 24, 2016 at 04:27:26PM +0200, Peter Zijlstra wrote:
> nf_conntrack_lock{,_all}() is borken as it misses a bunch of memory
> barriers to order the whole global vs local locks scheme.
>
> Even x86 (and other TSO archs) are affected.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  net/netfilter/nf_conntrack_core.c | 30 +++++++++++++++++++++++++++++-
>  1 file changed, 29 insertions(+), 1 deletion(-)
>
> --- a/net/netfilter/nf_conntrack_core.c
> +++ b/net/netfilter/nf_conntrack_core.c
> @@ -74,7 +74,18 @@ void nf_conntrack_lock(spinlock_t *lock)
>  	spin_lock(lock);
>  	while (unlikely(nf_conntrack_locks_all)) {

And note that we can replace nf_conntrack_locks_all with
spin_is_locked(&nf_conntrack_locks_all_lock), since that is the exact
same state. But I didn't want to do too much in one go.

>  		spin_unlock(lock);
> +		/*
> +		 * Order the nf_conntrack_locks_all load vs the
> +		 * spin_unlock_wait() loads below, to ensure locks_all
> +		 * is indeed held.
> +		 */
> +		smp_rmb(); /* spin_lock(locks_all) */
>  		spin_unlock_wait(&nf_conntrack_locks_all_lock);
> +		/*
> +		 * The control dependency's LOAD->STORE order is enough to
> +		 * guarantee the spin_lock() is ordered after the above
> +		 * unlock_wait(). And the ACQUIRE of the lock ensures we are
> +		 * fully ordered against the spin_unlock() of locks_all.
> +		 */
>  		spin_lock(lock);
>  	}
>  }
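For readers following along outside the kernel tree, the scheme being fixed can be sketched as a userspace analogue: per-bucket "local" locks plus one global lock whose holder must exclude every local-lock holder. This is a minimal sketch using C11 atomics, not the kernel code; the names (conntrack_lock, conntrack_all_lock, local_locks) and the toy spinlock are illustrative, and the acquire fence stands in for the smp_rmb() the patch adds. A single-threaded sketch like this cannot demonstrate the race itself, only the shape of the two lock paths.

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { atomic_bool held; } spinlock;

static void spin_lock(spinlock *l)
{
	/* test-and-set with acquire ordering, like the kernel's spin_lock() */
	while (atomic_exchange_explicit(&l->held, true, memory_order_acquire))
		;
}

static void spin_unlock(spinlock *l)
{
	atomic_store_explicit(&l->held, false, memory_order_release);
}

static void spin_unlock_wait(spinlock *l)
{
	/* spin until the lock is observed free, without taking it */
	while (atomic_load_explicit(&l->held, memory_order_acquire))
		;
}

#define NR_LOCKS 8
static spinlock local_locks[NR_LOCKS];
static spinlock locks_all_lock;
static atomic_bool locks_all;

/* Local-lock slow path, mirroring the patched nf_conntrack_lock() */
static void conntrack_lock(spinlock *lock)
{
	spin_lock(lock);
	while (atomic_load_explicit(&locks_all, memory_order_relaxed)) {
		spin_unlock(lock);
		/* order the locks_all load before the unlock_wait loads
		 * (the smp_rmb() the patch inserts) */
		atomic_thread_fence(memory_order_acquire);
		spin_unlock_wait(&locks_all_lock);
		spin_lock(lock);
	}
}

/* Global path, mirroring nf_conntrack_all_lock(): take the global lock,
 * publish locks_all, then wait for every local holder to drain. */
static void conntrack_all_lock(void)
{
	spin_lock(&locks_all_lock);
	atomic_store_explicit(&locks_all, true, memory_order_seq_cst);
	for (int i = 0; i < NR_LOCKS; i++)
		spin_unlock_wait(&local_locks[i]);
}

static void conntrack_all_unlock(void)
{
	atomic_store_explicit(&locks_all, false, memory_order_release);
	spin_unlock(&locks_all_lock);
}
```

A global-path caller would bracket its critical section with conntrack_all_lock()/conntrack_all_unlock(); local-path callers use conntrack_lock() on one of the local_locks and plain spin_unlock() to release.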