Message-ID: <49CA88D4.6010808@trash.net>
Date:	Wed, 25 Mar 2009 20:41:08 +0100
From:	Patrick McHardy <kaber@...sh.net>
To:	Eric Dumazet <dada1@...mosbay.com>
CC:	mbizon@...ebox.fr, "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Joakim Tjernlund <Joakim.Tjernlund@...nsmode.se>,
	avorontsov@...mvista.com, netdev@...r.kernel.org,
	Netfilter Developers <netfilter-devel@...r.kernel.org>
Subject: Re: [PATCH] conntrack: use SLAB_DESTROY_BY_RCU for nf_conn structs

Eric Dumazet wrote:
> Patrick McHardy wrote:
>>>      NF_CT_ASSERT(ct);
>>> +    if (unlikely(!atomic_inc_not_zero(&ct->ct_general.use)))
>>> +        return 0;
>> Can we assume the next pointer still points to the next entry
>> in the same chain after the refcount dropped to zero?
>>
> 
> We are looking at chain N.
> If we cannot atomic_inc() the refcount, we got a deleted entry.
> If we could atomic_inc(), we may meet an entry that just moved to another chain X.
> 
> When hitting its end, we continue the search on chain N+1, so we only
> skip the end of the previous chain (N). We can 'forget' some entries, or print
> a given entry several times.
> 
> 
> We could solve this by:
> 
> 1) Checking the hash value: if it is not the expected one,
>    going back to the head of chain N (potentially re-printing already handled entries).
>    So it is not a *perfect* solution.
> 
> 2) Using a lock to forbid writers (as done for UDP/TCP), but it is expensive and
> won't solve the other problem:
> 
> We won't avoid emitting the same entry several times anyway (this is a flaw of
> the current seq_file handling, since we 'count' entries to be skipped, and this is
> wrong if some entries were deleted or inserted meanwhile).
> 
> We have the same problem on /proc/net/udp & /proc/net/tcp; I am not sure we should care...
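
Right, for reference the positioning between read() calls currently looks
more or less like this (simplified sketch from memory, helper names
approximate; ->start() basically does "return ct_get_idx(seq, *pos)"):

static struct hlist_node *ct_get_idx(struct seq_file *seq, loff_t pos)
{
        struct hlist_node *head = ct_get_first(seq);

        /* skip 'pos' entries counted from the beginning of the table;
         * if entries were inserted or deleted since the previous
         * read(), the same position now names a different entry, so
         * the dump can repeat or miss entries */
        if (head)
                while (pos && (head = ct_get_next(seq, head)))
                        pos--;
        return pos ? NULL : head;
}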

I think double entries are not a problem; as you say, there
are already other cases where this can happen. But I think we
should try our best to ensure that every entry present at the start
and still present at the end of a dump is also contained in the
dump, otherwise the guarantees seem too weak to still be useful.
Your first proposal would do exactly that, right?
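
Something along these lines, I suppose (very rough sketch, helper and field
names approximate, not against your actual patch; assuming the iterator
keeps the bucket currently being dumped in st->bucket and the table is an
array of hlist_nulls_head, nf_conntrack_hash[]):

static struct hlist_nulls_node *
ct_dump_next(struct ct_iter_state *st, struct hlist_nulls_node *n)
{
        struct nf_conntrack_tuple_hash *h;

        n = rcu_dereference(n->next);
        for (;;) {
                if (is_a_nulls(n)) {
                        /* end of a chain: the nulls value tells us which
                         * bucket this chain belongs to; if it is not ours,
                         * the entry we followed moved to another chain */
                        if (get_nulls_value(n) == st->bucket)
                                return NULL;    /* caller goes to next bucket */
                } else {
                        h = hlist_nulls_entry(n, struct nf_conntrack_tuple_hash,
                                              hnnode);
                        /* entry still hashed into the bucket we are dumping? */
                        if (hash_conntrack(&h->tuple) == st->bucket)
                                return n;
                        /* it was freed and reused for an entry in another
                         * chain; fall through and restart */
                }
                /* go back to the head of chain N, possibly re-printing
                 * entries we already handled */
                n = rcu_dereference(nf_conntrack_hash[st->bucket].first);
        }
}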

> Also, the current resizing code can cause a problem for a /proc/net/ip_conntrack reader, since
> the hash table can switch while it is dumping: many entries might be lost or shown again...

That's true. But it's a very rare operation, so I think it's mainly
a documentation issue.
