Message-ID: <4A5DF5B4.5090809@trash.net>
Date: Wed, 15 Jul 2009 17:28:52 +0200
From: Patrick McHardy <kaber@...sh.net>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
paulmck@...ux.vnet.ibm.com
Subject: Re: [PATCH] net: nf_conntrack_alloc() should not use kmem_cache_zalloc()
Eric Dumazet wrote:
> [PATCH] net: nf_conntrack_alloc() should not use kmem_cache_zalloc()
>
> When a slab cache uses SLAB_DESTROY_BY_RCU, we must be careful when
> allocating objects, since the slab allocator can return a freed object
> that is still being used by lockless readers.
>
> In particular, nf_conntrack RCU lookups rely on ct->tuplehash[xxx].hnnode.next
> always being valid (i.e. containing a valid 'nulls' value, or a valid pointer
> to the next object in the hash chain).
>
> kmem_cache_zalloc() sets up the object with NULL values, but NULL is not a
> valid value for ct->tuplehash[xxx].hnnode.next.
>
> The fix is to call kmem_cache_alloc() and do the zeroing ourselves.
I think this is still racy, please see below:
> diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
> index 7508f11..23feafa 100644
> --- a/net/netfilter/nf_conntrack_core.c
> +++ b/net/netfilter/nf_conntrack_core.c
> @@ -561,17 +561,28 @@ struct nf_conn *nf_conntrack_alloc(struct net *net,
> }
> }
>
> - ct = kmem_cache_zalloc(nf_conntrack_cachep, gfp);
> + /*
> + * Do not use kmem_cache_zalloc(), as this cache uses
> + * SLAB_DESTROY_BY_RCU.
> + */
> + ct = kmem_cache_alloc(nf_conntrack_cachep, gfp);
> if (ct == NULL) {
> pr_debug("nf_conntrack_alloc: Can't alloc conntrack.\n");
> atomic_dec(&net->ct.count);
> return ERR_PTR(-ENOMEM);
> }
> -
__nf_conntrack_find() on another CPU finds the entry at this point.
> + /*
> + * Let ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode.next
> + * and ct->tuplehash[IP_CT_DIR_REPLY].hnnode.next unchanged.
> + */
> + memset(&ct->tuplehash[IP_CT_DIR_MAX], 0,
> + sizeof(*ct) - offsetof(struct nf_conn, tuplehash[IP_CT_DIR_MAX]));
> spin_lock_init(&ct->lock);
> atomic_set(&ct->ct_general.use, 1);
nf_conntrack_find_get() successfully performs atomic_inc_not_zero()
at this point, followed by another tuple comparison, which also
succeeds.
Am I missing something? I think we need to make sure the reference
count is not increased until the new tuples are visible.
> ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple = *orig;
> + ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode.pprev = NULL;
> ct->tuplehash[IP_CT_DIR_REPLY].tuple = *repl;
> + ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev = NULL;
> /* Don't set timer yet: wait for confirmation */
> setup_timer(&ct->timeout, death_by_timeout, (unsigned long)ct);
> #ifdef CONFIG_NET_NS
>