Message-Id: <20130425.042007.1583080085524610665.davem@davemloft.net>
Date: Thu, 25 Apr 2013 04:20:07 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: willemb@...gle.com
Cc: eric.dumazet@...il.com, netdev@...r.kernel.org,
stephen@...workplumber.org
Subject: Re: [PATCH net-next v5] rps: selective flow shedding during
softnet overflow
From: Willem de Bruijn <willemb@...gle.com>
Date: Tue, 23 Apr 2013 20:37:27 -0400
> A cpu executing the network receive path sheds packets when its input
> queue grows to netdev_max_backlog. A single high rate flow (such as a
> spoofed source DoS) can exceed a single cpu processing rate and will
> degrade throughput of other flows hashed onto the same cpu.
>
> This patch adds a more fine grained hashtable. If the netdev backlog
> is above a threshold, IRQ cpus track the ratio of total traffic of
> each flow (using 4096 buckets, configurable). The ratio is measured
> by counting the number of packets per flow over the last 256 packets
> from the source cpu. Any flow that occupies a large fraction of this
> (set at 50%) will see packet drop while above the threshold.
>
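The per-flow accounting described above can be sketched in userspace C. This is an illustrative simulation, not the patch's actual code: the struct and function names, and the exact eviction scheme (a ring buffer of the last 256 bucket ids), are assumptions for the sketch.

```c
/* Userspace sketch of per-cpu flow-limit accounting: track packets per
 * flow bucket over a rolling window of the last HISTORY packets, and
 * flag a packet for dropping when its flow exceeds half the window.
 * Names and eviction details are illustrative, not from the patch. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define FLOW_BUCKETS 4096   /* hash buckets (configurable in the patch) */
#define HISTORY      256    /* recent packets tracked per source cpu */

struct flow_limit {
    uint16_t history[HISTORY];        /* ring of recent bucket ids */
    unsigned int pos;                 /* next ring slot to overwrite */
    unsigned int count[FLOW_BUCKETS]; /* packets per bucket in window */
    unsigned int seen;                /* total packets observed so far */
};

/* Record one packet with the given flow hash; return true if it should
 * be dropped because its flow holds more than half the recent window. */
static bool flow_limit_drop(struct flow_limit *fl, uint32_t hash)
{
    unsigned int bucket = hash & (FLOW_BUCKETS - 1);

    if (fl->seen >= HISTORY)
        fl->count[fl->history[fl->pos]]--;  /* evict the oldest sample */
    else
        fl->seen++;

    fl->history[fl->pos] = bucket;
    fl->pos = (fl->pos + 1) % HISTORY;
    fl->count[bucket]++;

    return fl->count[bucket] > HISTORY / 2;
}
```

With this scheme, a single flow sending every packet trips the 50% threshold after 129 packets, while flows that share the cpu more evenly never do, which matches the intent of shedding only the dominant flow.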
> Tested:
> Setup is a multi-threaded UDP echo server with network rx IRQ on cpu0,
> kernel receive (RPS) on cpu0 and application threads on cpus 2--7
> each handling 20k req/s. Throughput halves when hit with a 400 kpps
> antagonist storm. With this patch applied, antagonist overload is
> dropped and the server processes its complete load.
>
> The patch is effective when kernel receive processing is the
> bottleneck. The above RPS scenario is an extreme case, but the same is
> reached with RFS and sufficient kernel processing (iptables, packet
> socket tap, ..).
>
> Signed-off-by: Willem de Bruijn <willemb@...gle.com>
This doesn't compile:
net/core/sysctl_net_core.c: In function ‘flow_limit_cpu_sysctl’:
net/core/sysctl_net_core.c:114:10: error: invalid type argument of ‘->’ (have ‘struct mutex’)
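For context, this error class typically means a `struct mutex` object was passed where a pointer is expected, e.g. to a macro like `lockdep_is_held()` that dereferences its argument with `->`. The snippet below is a simplified stand-in to illustrate the pattern, not the actual kernel code or the actual bug in the patch; the fix is usually to pass the mutex's address.

```c
/* Illustration of the error class: a macro that applies '->' to its
 * argument must receive a pointer. Passing the struct itself produces
 * "invalid type argument of '->' (have 'struct mutex')".
 * These types and the macro are simplified stand-ins, not kernel code. */
#include <stdbool.h>

struct lockdep_map { bool held; };
struct mutex { struct lockdep_map dep_map; };

/* simplified stand-in for the kernel's lockdep_is_held() macro */
#define lockdep_is_held(lock) ((lock)->dep_map.held)

static struct mutex flow_limit_update_mutex = { { true } };

static bool check_held(void)
{
    /* BROKEN: lockdep_is_held(flow_limit_update_mutex) -- '->' on a struct */
    return lockdep_is_held(&flow_limit_update_mutex); /* fixed: pass address */
}
```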
Also, please change the Kconfig entry to be:
config NET_FLOW_LIMIT
boolean
depends on RPS
default y
like RPS et al. are.
Thanks.