Message-ID: <497FF860.9080406@cosmosbay.com>
Date:	Wed, 28 Jan 2009 07:17:04 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Stephen Hemminger <shemminger@...tta.com>
CC:	David Miller <davem@...emloft.net>,
	Patrick McHardy <kaber@...sh.net>, netdev@...r.kernel.org,
	netfilter-devel@...r.kernel.org
Subject: Re: [RFT 3/4] netfilter: use sequence number synchronization for
 counters

Stephen Hemminger wrote:
> Change how synchronization is done on the iptables counters. Use seqcount
> wrapper instead of depending on reader/writer lock.
>
> Signed-off-by: Stephen Hemminger <shemminger@...tta.com>
>
>
>   
> --- a/net/ipv4/netfilter/ip_tables.c	2009-01-27 14:48:41.567879095 -0800
> +++ b/net/ipv4/netfilter/ip_tables.c	2009-01-27 15:45:05.766673246 -0800
> @@ -366,7 +366,9 @@ ipt_do_table(struct sk_buff *skb,
>  			if (IPT_MATCH_ITERATE(e, do_match, skb, &mtpar) != 0)
>  				goto no_match;
>  
> +			write_seqcount_begin(&e->seq);
>  			ADD_COUNTER(e->counters, ntohs(ip->tot_len), 1);
> +			write_seqcount_end(&e->seq);
>   
It's not a good idea to do it like this (one seqcount_t per rule per CPU).

>  
>  			t = ipt_get_target(e);
>  			IP_NF_ASSERT(t->u.kernel.target);
> @@ -758,6 +760,7 @@ check_entry_size_and_hooks(struct ipt_en
>  	   < 0 (not IPT_RETURN). --RR */
>  
>  	/* Clear counters and comefrom */
> +	seqcount_init(&e->seq);
>  	e->counters = ((struct xt_counters) { 0, 0 });
>  	e->comefrom = 0;
>  
> @@ -915,14 +918,17 @@ get_counters(const struct xt_table_info 
>  			  &i);
>  
>  	for_each_possible_cpu(cpu) {
> +		struct ipt_entry *e = t->entries[cpu];
> +		unsigned int start;
> +
>  		if (cpu == curcpu)
>  			continue;
>  		i = 0;
> -		IPT_ENTRY_ITERATE(t->entries[cpu],
> -				  t->size,
> -				  add_entry_to_counter,
> -				  counters,
> -				  &i);
> +		do {
> +			start = read_seqcount_begin(&e->seq);
> +			IPT_ENTRY_ITERATE(e, t->size,
> +					  add_entry_to_counter, counters, &i);
> +		} while (read_seqcount_retry(&e->seq, start));
>   
This will never complete on a loaded machine with a big set of rules.
By the time we reach the end of IPT_ENTRY_ITERATE, many packets have come
in during the iteration, so we restart, but with wrong accumulated values:
nothing rolls back what was already added to the accumulator.

You want to do the seqcount begin/retry in the leaf function
(add_entry_to_counter()), and add a value pair (bytes/packets) to the
accumulator only once you are sure it is consistent.
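A user-space sketch of that idea, with a simplified stand-in for the kernel's seqcount_t (the names mirror the patch, but this is not the actual kernel API, just a model of the retry-per-rule pattern):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's seqcount_t and xt_counters. */
typedef struct { unsigned seq; } seqcount_t;
typedef struct { uint64_t bcnt, pcnt; } counters_t;

struct entry {
	seqcount_t seq;
	counters_t counters;
};

static unsigned read_seqcount_begin(const seqcount_t *s)
{
	return s->seq;
}

static int read_seqcount_retry(const seqcount_t *s, unsigned start)
{
	return s->seq != start;
}

/* Writer side: bump the sequence around the counter update. */
static void update_counters(struct entry *e, uint64_t bytes)
{
	e->seq.seq++;			/* write_seqcount_begin() */
	e->counters.bcnt += bytes;
	e->counters.pcnt += 1;
	e->seq.seq++;			/* write_seqcount_end() */
}

/* Leaf function: snapshot one rule's pair, retry only that rule,
 * and add to the accumulator only a consistent snapshot. */
static void add_entry_to_counter(const struct entry *e, counters_t *total)
{
	unsigned start;
	uint64_t bcnt, pcnt;

	do {
		start = read_seqcount_begin(&e->seq);
		bcnt = e->counters.bcnt;
		pcnt = e->counters.pcnt;
	} while (read_seqcount_retry(&e->seq, start));

	total->bcnt += bcnt;
	total->pcnt += pcnt;
}
```

A retry here re-reads one rule's pair, so the accumulator never has to be rolled back and the walk always makes forward progress.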

Using one seqcount_t per rule (struct ipt_entry) is also very expensive:
that is 4 bytes per rule times num_possible_cpus().

You need one seqcount_t per CPU instead.
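The per-CPU alternative can be sketched like this, again in user space with a plain array standing in for the kernel's per-CPU variables (DEFINE_PER_CPU in real kernel code); one sequence bump covers the whole table update for a packet, and the reader retries per remote CPU rather than per rule:

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS 4

struct counters { uint64_t bcnt, pcnt; };

/* One sequence counter per CPU (a plain array models per-CPU data here). */
static unsigned per_cpu_seq[NR_CPUS];
static struct counters per_cpu_counters[NR_CPUS];

/* Writer path, running on `cpu` (with preemption disabled in the kernel):
 * a single seq bump brackets every counter touched for this packet. */
static void do_table_update(int cpu, uint64_t bytes)
{
	per_cpu_seq[cpu]++;		/* begin */
	per_cpu_counters[cpu].bcnt += bytes;
	per_cpu_counters[cpu].pcnt += 1;
	per_cpu_seq[cpu]++;		/* end */
}

/* Reader path: take a consistent snapshot of one CPU's counters,
 * retrying only that CPU if a writer raced with us. */
static struct counters get_cpu_counters(int cpu)
{
	struct counters snap;
	unsigned start;

	do {
		start = per_cpu_seq[cpu];
		snap = per_cpu_counters[cpu];
	} while (per_cpu_seq[cpu] != start);

	return snap;
}
```

This keeps the memory cost at one counter per CPU, independent of the number of rules, instead of 4 bytes per rule per possible CPU.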


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
