Date:	Wed, 17 Jun 2009 14:33:50 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Patrick McHardy <kaber@...sh.net>
CC:	Ingo Molnar <mingo@...e.hu>, David Miller <davem@...emloft.net>,
	Thomas Gleixner <tglx@...utronix.de>,
	torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [bug] __nf_ct_refresh_acct(): WARNING: at lib/list_debug.c:30
 __list_add+0x7d/0xad()

Patrick McHardy wrote:
> Eric Dumazet wrote:
>> Patrick McHardy wrote:
>>> Before the conntrack is confirmed, it is exclusively handled by a
>>> single CPU. I agree that we need to make sure the IPS_CONFIRMED_BIT
>>> is visible before we add the conntrack to the hash table since the
>>> lookup is lockless, but simply moving the set_bit before the hash
>>> insertion should be fine I think.
>>>
>>
>>
>> The problem is that timeout.expires is either a relative or an absolute
>> timeout, and changes happen in __nf_conntrack_confirm() or
>> __nf_ct_refresh_acct().
>>
>> We must have synchronization (and barriers); a single bit won't be
>> enough.
> 
> Please have a look at the second patch I just sent. It relies
> on the RCU barriers to make sure all stores are visible before
> other CPUs can find the conntrack.
> 

Sorry, I don't understand how your second patch corrects the problem.

This (unconfirmed) conntrack is already visible to another CPU. That
other CPU can call __nf_ct_refresh_acct() while this CPU runs in
__nf_conntrack_confirm().
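
For reference, the refresh side looks roughly like this (a simplified
sketch, not the exact source; the function name is only for
illustration). The meaning of timeout.expires flips from relative to
absolute once IPS_CONFIRMED_BIT is visible:

	static void refresh_sketch(struct nf_conn *ct, unsigned long extra_jiffies)
	{
		if (!nf_ct_is_confirmed(ct)) {
			/* Believed unconfirmed: the timer is assumed not
			 * started yet, so expires is stored as a *relative*
			 * timeout. If the other CPU sits between add_timer()
			 * and set_bit() in __nf_conntrack_confirm(), this
			 * store hits a timer that is already pending. */
			ct->timeout.expires = extra_jiffies;
		} else {
			/* Confirmed: the timer is running, expires is an
			 * *absolute* deadline. */
			unsigned long newtime = jiffies + extra_jiffies;

			if (del_timer(&ct->timeout)) {
				ct->timeout.expires = newtime;
				add_timer(&ct->timeout);
			}
		}
	}

That is exactly the window marked in the quoted patch below.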

@@ -425,7 +425,6 @@ __nf_conntrack_confirm(struct sk_buff *skb)
 	/* Remove from unconfirmed list */
 	hlist_nulls_del_rcu(&ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode);
 
-	__nf_conntrack_hash_insert(ct, hash, repl_hash);
 	/* Timer relative to confirmation time, not original
 	   setting time, otherwise we'd get timer wrap in
 	   weird delay cases. */
@@ -433,8 +432,15 @@ __nf_conntrack_confirm(struct sk_buff *skb)
 	add_timer(&ct->timeout);

<<<< another CPU could change timeout.expires here (thinking it's still relative) >>>>

 	atomic_inc(&ct->ct_general.use);
 	set_bit(IPS_CONFIRMED_BIT, &ct->status);
+
+	/* Since the lookup is lockless, hash insertion must be after starting the
+	 * timer and setting the CONFIRMED bit. The RCU barriers guarantee that no
+	 * other CPU can find the conntrack before the above stores are visible.
+	 */
+	__nf_conntrack_hash_insert(ct, hash, repl_hash);
 	NF_CT_STAT_INC(net, insert);
 	spin_unlock_bh(&nf_conntrack_lock);
 	help = nfct_help(ct);
 	if (help && help->helper)
 		nf_conntrack_event_cache(IPCT_HELPER, ct);