Date:	Thu, 15 Apr 2010 06:03:54 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Changli Gao <xiaosuo@...il.com>
Cc:	"David S. Miller" <davem@...emloft.net>,
	Tom Herbert <therbert@...gle.com>, netdev@...r.kernel.org
Subject: Re: [PATCH] net: fix softnet_stat

On Thursday, 15 April 2010 at 11:41 +0800, Changli Gao wrote:
> On Thu, Apr 15, 2010 at 11:28 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> >> ----
> >>  net/core/dev.c |    5 ++---
> >>  1 file changed, 2 insertions(+), 3 deletions(-)
> >> diff --git a/net/core/dev.c b/net/core/dev.c
> >> index a10a216..b38a991 100644
> >> --- a/net/core/dev.c
> >> +++ b/net/core/dev.c
> >> @@ -2336,7 +2336,6 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu)
> >>       queue = &per_cpu(softnet_data, cpu);
> >>
> >>       local_irq_save(flags);
> >> -     __get_cpu_var(netdev_rx_stat).total++;
> >
> > I think this was nice, because we can see number of frames received by
> > this cpu, before giving them to another cpu.
> >
> 
> :( Packets will be counted again in __netif_receive_skb(). Now the
> meaning of 'total' is confusing.

Yes, this should be documented somewhere :)

> 
> > Maybe we want a new counter ?
> >
> >        __get_cpu_var(netdev_rx_stat).inpackets++;
> >
> 
> How about replacing total with received?
> 
> >>
> >>       rps_lock(queue);
> >>       if (queue->input_pkt_queue.qlen <= netdev_max_backlog) {
> >> @@ -2366,9 +2365,9 @@ enqueue:
> >>               goto enqueue;
> >>       }
> >>
> >> +     per_cpu(netdev_rx_stat, cpu).dropped++;
> >
> > This is a bit expensive, and could be a queue->rx_stat.dropped++
> > if netdev_rx_stat is moved inside queue structure.
> >
> 
> Won't it make softnet_data larger?

We move a per_cpu structure inside another one. No extra bytes needed.

> 
> >
> >>       rps_unlock(queue);
> >>
> >> -     __get_cpu_var(netdev_rx_stat).dropped++;
> >>       local_irq_restore(flags);
> >>
> >>       kfree_skb(skb);
> >> @@ -3565,7 +3564,7 @@ static int softnet_seq_show(struct seq_file *seq, void *v)
> >>       struct netif_rx_stats *s = v;
> >>
> >>       seq_printf(seq, "%08x %08x %08x %08x %08x %08x %08x %08x %08x %08x\n",
> >> -                s->total, s->dropped, s->time_squeeze, 0,
> >> +                s->total + s->dropped, s->dropped, s->time_squeeze, 0,
> >
> > This is wrong I believe... I prefer to add a new column, s->inpackets ?
> 
> If I rename total to received, is it still confusing?

Keep in mind /proc/net/softnet_stat is an ABI, and the first column had
the meaning: number of packets processed by this cpu.

But that was pre-RPS. Now we have a two-phase process.


This makes sense to me:

Let 'total' be the counter of packets processed in the upper layers
(IP/TCP stack). As /proc/net/softnet_stat has no header line, the
'total' name is not important and could be changed.

Add a new column (after the existing ones in the /proc file, to keep
ABI compatibility) with the meaning: number of packets pre-processed
before the network stack.

total: number of packets processed in the upper layers (IP stack, for
example)
dropped: number of packets dropped because this cpu's queue was full
time_squeeze: number of times packet processing on this cpu was
squeezed (budget or time exhausted)
cpu_collision:...
received_rps: number of times this cpu received an RPS signal from
another cpu
inpackets: number of packets pre-processed in layer 1 by this cpu (RPS
level)


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
