Message-Id: <20200914172453.1833883-1-weiwan@google.com>
Date: Mon, 14 Sep 2020 10:24:47 -0700
From: Wei Wang <weiwan@...gle.com>
To: "David S . Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Cc: Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Felix Fietkau <nbd@....name>, Wei Wang <weiwan@...gle.com>
Subject: [RFC PATCH net-next 0/6] implement kthread based napi poll

The idea of moving the napi poll process out of softirq context to a
kernel thread based context is not new.
Paolo Abeni and Hannes Frederic Sowa proposed patches to move napi
poll to a kthread back in 2016. And Felix Fietkau proposed patches
with a similar idea, using a workqueue to process napi poll, just a
few weeks ago.
The main reason we'd like to push forward with this idea is that the
scheduler has poor visibility into cpu cycles spent in softirq context,
and is not able to make optimal scheduling decisions for the user threads.
For example, in one of our application benchmarks where network load is
high, the CPUs handling network softirqs have ~80% cpu utilization, yet
user threads are still scheduled on those CPUs, despite more idle cpus
being available in the system, and we see very high tail latencies. In
this case, we have to explicitly pin user threads away from the CPUs
handling network softirqs to ensure good performance.
With napi poll moved to a kthread, the scheduler is in charge of
scheduling both the kthreads handling network load and the user threads,
and is able to make better decisions. In the previous benchmark, if we
do this and pin the kthreads processing napi poll to specific CPUs, the
scheduler is able to schedule user threads away from these CPUs
automatically.
And the reason we prefer 1 kthread per napi, instead of 1 workqueue
entity per host, is that a kthread is more configurable than a
workqueue, and we can leverage existing tuning tools for threads, like
taskset, chrt, etc, to tune the scheduling class, cpu set, and so on.
Another reason is that if we eventually want to provide a busy poll
feature using kernel threads for napi poll, a kthread seems more
suitable than a workqueue.
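
(Not part of the series, just an illustration of the point above: a
minimal userspace sketch of what taskset/chrt do to a napi kthread once
its pid is known. How the kthread is named or looked up is not shown
here, and the cpu/priority values are arbitrary.)

/* Pin a napi kthread to one CPU and give it an RT policy, i.e.
 * roughly what `taskset -pc <cpu> <pid>` and `chrt -f -p 50 <pid>` do.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <kthread-pid> <cpu>\n", argv[0]);
		return 1;
	}
	pid_t pid = (pid_t)atoi(argv[1]);
	int cpu = atoi(argv[2]);

	/* cpu affinity, as taskset would set it */
	cpu_set_t set;
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(pid, sizeof(set), &set))
		perror("sched_setaffinity");

	/* scheduling class/priority, as `chrt -f -p 50` would set it */
	struct sched_param sp = { .sched_priority = 50 };
	if (sched_setscheduler(pid, SCHED_FIFO, &sp))
		perror("sched_setscheduler");

	return 0;
}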
In this patch series, I revived Paolo and Hannes's patches from 2016
and kept them as the first 2 patches. Then there are changes proposed
by Felix, Jakub, Paolo and myself on top of those, with suggestions
from Eric Dumazet.
In terms of performance, I ran tcp_rr tests with 1000 flows with
various request/response sizes, with RFS/RPS disabled, and compared
performance between softirq and kthread. The host has 56 hyper threads
and a 100Gbps nic.
         req/resp   QPS     50%tile  90%tile  99%tile  99.9%tile
softirq  1B/1B      2.19M   284us    987us    1.1ms    1.56ms
kthread  1B/1B      2.14M   295us    987us    1.0ms    1.17ms
softirq  5KB/5KB    1.31M   869us    1.06ms   1.28ms   2.38ms
kthread  5KB/5KB    1.32M   878us    1.06ms   1.26ms   1.66ms
softirq  1MB/1MB    10.78K  84ms     166ms    234ms    294ms
kthread  1MB/1MB    10.83K  82ms     173ms    262ms    320ms
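
(Again not part of the series: a rough sketch of how RPS/RFS can be
disabled for runs like the above, using the standard knobs described in
Documentation/networking/scaling.rst. The device name and the rx queue
count below are placeholders.)

/* Disable RPS/RFS on a device by clearing the usual sysfs/procfs knobs. */
#include <stdio.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return;
	}
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	char path[256];
	int q;

	/* RFS: zero the global flow table */
	write_str("/proc/sys/net/core/rps_sock_flow_entries", "0");

	for (q = 0; q < 16; q++) {	/* assume 16 rx queues on "eth0" */
		/* RPS: empty cpu mask on each rx queue */
		snprintf(path, sizeof(path),
			 "/sys/class/net/eth0/queues/rx-%d/rps_cpus", q);
		write_str(path, "0");
		/* RFS: no per-queue flow entries */
		snprintf(path, sizeof(path),
			 "/sys/class/net/eth0/queues/rx-%d/rps_flow_cnt", q);
		write_str(path, "0");
	}
	return 0;
}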
I also ran one application benchmark where the user threads have more
work to do. We do see a good amount of tail latency reduction with the
kthread model.
Paolo Abeni (2):
  net: implement threaded-able napi poll loop support
  net: add sysfs attribute to control napi threaded mode

Felix Fietkau (1):
  net: extract napi poll functionality to __napi_poll()

Jakub Kicinski (1):
  net: modify kthread handler to use __napi_poll()

Paolo Abeni (1):
  net: process RPS/RFS work in kthread context

Wei Wang (1):
  net: improve napi threaded config
 include/linux/netdevice.h |   6 ++
 net/core/dev.c            | 146 +++++++++++++++++++++++++++++++++++---
 net/core/net-sysfs.c      |  99 ++++++++++++++++++++++++++
 3 files changed, 242 insertions(+), 9 deletions(-)
--
2.28.0.618.gf4bc123cb7-goog