Message-ID: <cover.1710525524.git.yan@cloudflare.com>
Date: Fri, 15 Mar 2024 12:55:00 -0700
From: Yan Zhai <yan@...udflare.com>
To: netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Jiri Pirko <jiri@...nulli.us>, Simon Horman <horms@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Coco Li <lixiaoyan@...gle.com>, Wei Wang <weiwan@...gle.com>,
Alexander Duyck <alexanderduyck@...com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
bpf@...r.kernel.org, kernel-team@...udflare.com,
Joel Fernandes <joel@...lfernandes.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
Toke Høiland-Jørgensen <toke@...hat.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Steven Rostedt <rostedt@...dmis.org>, mark.rutland@....com,
Jesper Dangaard Brouer <hawk@...nel.org>
Subject: [PATCH v4 net 0/3] Report RCU QS for busy network kthreads
This changeset fixes a common problem for busy networking kthreads.
These threads, e.g. NAPI threads, typically do the following (see the
sketch after this list):
* poll a batch of packets
* if there is more work, call cond_resched() to allow scheduling
* continue to poll more packets while the rx queue is not empty
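For illustration, here is a minimal sketch of the loop shape in
question; poll_rx_queue() and rx_queue_empty() are hypothetical
stand-ins for the driver-specific work, not kernel APIs:

#include <linux/kthread.h>
#include <linux/sched.h>

static int busy_poll_kthread(void *data)
{
	while (!kthread_should_stop()) {
		/* poll a batch of packets */
		poll_rx_queue(data);
		if (!rx_queue_empty(data)) {
			/*
			 * More work pending: let the scheduler run if
			 * needed. On a busy CPU this never sleeps, so
			 * the thread never passes through a voluntary
			 * context switch that RCU Tasks could count
			 * as a quiescent state.
			 */
			cond_resched();
			continue;
		}
		/* rx queue drained: wait for the next wakeup (elided) */
	}
	return 0;
}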
We observed this being a problem in production, since it can block RCU
tasks from making progress under heavy load. Investigation indicates
that merely calling cond_resched() is insufficient for RCU tasks to
reach quiescent states. This affects at least threaded NAPI,
napi_busy_loop, and the cpumap kthread.
By reporting RCU QSes in these kthreads periodically before calling
cond_resched() (sketched below), the blocked RCU waiters can correctly
progress. Instead of reporting a QS only for RCU tasks, this code
shares the same concern noted in commit d28139c4e967 ("rcu: Apply
RCU-bh QSes to RCU-sched and RCU-preempt when safe"), so it reports a
consolidated QS for safety.
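The helper added in patch 1 looks roughly like the following (a sketch
based on this description; the patch itself is authoritative). It
reports a consolidated QS via rcu_softirq_qs() at most once every
100ms, with preemption disabled around the report:

/*
 * Sketch of the rcu_softirq_qs_periodic() helper. @old_ts holds the
 * jiffies value of the last report and is updated in place.
 */
#define rcu_softirq_qs_periodic(old_ts)				\
do {								\
	if (time_after(jiffies, (old_ts) + HZ / 10)) {		\
		preempt_disable();				\
		rcu_softirq_qs();				\
		preempt_enable();				\
		(old_ts) = jiffies;				\
	}							\
} while (0)

Callers initialize a local "unsigned long last_qs = jiffies;" before
entering the loop and invoke rcu_softirq_qs_periodic(last_qs) on each
repoll iteration, before cond_resched().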
It is worth noting that, although this problem is reproducible in
napi_busy_loop, it only shows up when the polling interval is set as
high as 2ms, far larger than the 50us-100us recommended in the
documentation. So napi_busy_loop is left untouched.
V3: https://lore.kernel.org/lkml/20240314145459.7b3aedf1@kernel.org/t/
V2: https://lore.kernel.org/bpf/ZeFPz4D121TgvCje@debian.debian/
V1: https://lore.kernel.org/lkml/Zd4DXTyCf17lcTfq@debian.debian/#t
changes since v3:
* fixed kernel-doc errors
changes since v2:
* created a helper in rcu header to abstract the behavior
* also fixed the cpumap kthread
changes since v1:
* disable preemption first, as suggested by Paul McKenney
Yan Zhai (3):
rcu: add a helper to report consolidated flavor QS
net: report RCU QS on threaded NAPI repolling
bpf: report RCU QS in cpumap kthread
include/linux/rcupdate.h | 24 ++++++++++++++++++++++++
kernel/bpf/cpumap.c | 3 +++
net/core/dev.c | 3 +++
3 files changed, 30 insertions(+)
--
2.30.2