[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20090127222837.4ea8b255@extreme>
Date: Tue, 27 Jan 2009 22:28:37 -0800
From: Stephen Hemminger <shemminger@...tta.com>
To: Eric Dumazet <dada1@...mosbay.com>
Cc: David Miller <davem@...emloft.net>,
Patrick McHardy <kaber@...sh.net>, netdev@...r.kernel.org,
netfilter-devel@...r.kernel.org
Subject: Re: [RFT 3/4] netfilter: use sequence number synchronization for
counters
On Wed, 28 Jan 2009 07:17:04 +0100
Eric Dumazet <dada1@...mosbay.com> wrote:
> Stephen Hemminger a écrit :
> > Change how synchronization is done on the iptables counters. Use seqcount
> > wrapper instead of depending on reader/writer lock.
> >
> > Signed-off-by: Stephen Hemminger <shemminger@...tta.com>
> >
> >
> >
> > --- a/net/ipv4/netfilter/ip_tables.c 2009-01-27 14:48:41.567879095 -0800
> > +++ b/net/ipv4/netfilter/ip_tables.c 2009-01-27 15:45:05.766673246 -0800
> > @@ -366,7 +366,9 @@ ipt_do_table(struct sk_buff *skb,
> > if (IPT_MATCH_ITERATE(e, do_match, skb, &mtpar) != 0)
> > goto no_match;
> >
> > + write_seqcount_begin(&e->seq);
> > ADD_COUNTER(e->counters, ntohs(ip->tot_len), 1);
> > + write_seqcount_end(&e->seq);
> >
> Its not very good to do it like this, (one seqcount_t per rule per cpu)
If we use one count per table, that solves it, but it becomes a hot
spot, and on an active machine will never settle.
> >
> > t = ipt_get_target(e);
> > IP_NF_ASSERT(t->u.kernel.target);
> > @@ -758,6 +760,7 @@ check_entry_size_and_hooks(struct ipt_en
> > < 0 (not IPT_RETURN). --RR */
> >
> > /* Clear counters and comefrom */
> > + seqcount_init(&e->seq);
> > e->counters = ((struct xt_counters) { 0, 0 });
> > e->comefrom = 0;
> >
> > @@ -915,14 +918,17 @@ get_counters(const struct xt_table_info
> > &i);
> >
> > for_each_possible_cpu(cpu) {
> > + struct ipt_entry *e = t->entries[cpu];
> > + unsigned int start;
> > +
> > if (cpu == curcpu)
> > continue;
> > i = 0;
> > - IPT_ENTRY_ITERATE(t->entries[cpu],
> > - t->size,
> > - add_entry_to_counter,
> > - counters,
> > - &i);
> > + do {
> > + start = read_seqcount_begin(&e->seq);
> > + IPT_ENTRY_ITERATE(e, t->size,
> > + add_entry_to_counter, counters, &i);
> > + } while (read_seqcount_retry(&e->seq, start));
> >
> This will never complete on a loaded machine and a big set of rules.
> When we reach the end of IPT_ENTRY_ITERATE, we notice many packets came
> while doing the iteration and restart,
> with wrong accumulated values (no rollback of what was done to accumulator)
>
> You want to do the seqcount_begin/end in the leaf function
> (add_entry_to_counter()), and make accumulate a value pair (bytes/counter)
> only once you are sure they are correct.
>
> Using one seqcount_t per rule (struct ipt_entry) is very expensive.
> (This is 4 bytes per rule X num_possible_cpus())
>
> You need one seqcount_t per cpu
The other option would be swapping counters and using rcu, but that adds lots of
RCU synchronization, and RCU sync overhead only seems to be growing.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Powered by blists - more mailing lists