Message-Id: <20240823173103.94978-3-jdamato@fastly.com>
Date: Fri, 23 Aug 2024 17:30:53 +0000
From: Joe Damato <jdamato@...tly.com>
To: netdev@...r.kernel.org
Cc: amritha.nambiar@...el.com,
sridhar.samudrala@...el.com,
sdf@...ichev.me,
peter@...eblog.net,
m2shafiei@...terloo.ca,
bjorn@...osinc.com,
hch@...radead.org,
willy@...radead.org,
willemdebruijn.kernel@...il.com,
skhawaja@...gle.com,
kuba@...nel.org,
Martin Karsten <mkarsten@...terloo.ca>,
Joe Damato <jdamato@...tly.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Jiri Pirko <jiri@...nulli.us>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
linux-kernel@...r.kernel.org (open list)
Subject: [PATCH net-next 2/6] net: Suspend softirq when prefer_busy_poll is set
From: Martin Karsten <mkarsten@...terloo.ca>

When NAPI_F_PREFER_BUSY_POLL is set during busy_poll_stop and the
irq_suspend_timeout sysfs value is nonzero, that timeout is used to
defer softirq scheduling, potentially for longer than gro_flush_timeout
would allow. This can be used to effectively suspend softirq processing
for the time it takes an application to process data and return to its
next busy-poll.
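
For context, a minimal userspace sketch of how an application might opt
into this mechanism. It assumes the epoll busy-poll ioctl EPIOCSPARAMS
(available since v6.9) and the irq_suspend_timeout sysfs knob added
earlier in this series; the helper name, device path, and timing values
are illustrative only:

/*
 * Illustrative sketch, not part of this patch. Assumes EPIOCSPARAMS
 * (v6.9+) and an irq_suspend_timeout sysfs attribute alongside the
 * existing gro_flush_timeout; values below are made up.
 */
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>

#ifndef EPIOCSPARAMS
struct epoll_params {
	uint32_t busy_poll_usecs;
	uint16_t busy_poll_budget;
	uint8_t prefer_busy_poll;
	uint8_t __pad;			/* must be zero */
};
#define EPOLL_IOC_TYPE 0x8A
#define EPIOCSPARAMS _IOW(EPOLL_IOC_TYPE, 0x01, struct epoll_params)
#endif

/*
 * Opt an epoll context into prefer-busy-poll. With, e.g.,
 * echo 20000000 > /sys/class/net/<dev>/irq_suspend_timeout
 * device IRQs then stay suspended between busy-poll cycles.
 */
static int enable_irq_suspend(int epfd)
{
	struct epoll_params p = {
		.busy_poll_usecs = 64,	/* example timing/budget values */
		.busy_poll_budget = 64,
		.prefer_busy_poll = 1,	/* sets NAPI_F_PREFER_BUSY_POLL */
	};

	return ioctl(epfd, EPIOCSPARAMS, &p);
}
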
The call to napi->poll in busy_poll_stop might lead to an invocation of
napi_complete_done. Since the prefer-busy flag is still set at that
point, the same logic is applied there to defer softirq scheduling for
irq_suspend_timeout.
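
Put differently, after this patch both napi_complete_done and
busy_poll_stop pick the rearm timeout in the same order, roughly as in
the following condensed sketch (not the kernel code itself; the helper
name and flattened parameters are illustrative):

/*
 * Condensed sketch of the shared rearm decision; not kernel code.
 */
static unsigned long pick_rearm_timeout(int prefer_busy_poll,
					unsigned long irq_suspend_timeout,
					unsigned int *defer_hard_irqs_count,
					unsigned long gro_flush_timeout)
{
	/* A nonzero irq_suspend_timeout wins while prefer-busy is set. */
	if (prefer_busy_poll && irq_suspend_timeout)
		return irq_suspend_timeout;

	/* Otherwise fall back to the existing deferral pair. */
	if (*defer_hard_irqs_count > 0) {
		(*defer_hard_irqs_count)--;
		return gro_flush_timeout;
	}

	return 0;	/* 0: re-enable device IRQs immediately */
}
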
Signed-off-by: Martin Karsten <mkarsten@...terloo.ca>
Co-developed-by: Joe Damato <jdamato@...tly.com>
Signed-off-by: Joe Damato <jdamato@...tly.com>
Tested-by: Joe Damato <jdamato@...tly.com>
Tested-by: Martin Karsten <mkarsten@...terloo.ca>
---
 net/core/dev.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 3bf325ec25a3..74060ba866d4 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6230,7 +6230,12 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
 		timeout = READ_ONCE(n->dev->gro_flush_timeout);
 		n->defer_hard_irqs_count = READ_ONCE(n->dev->napi_defer_hard_irqs);
 	}
-	if (n->defer_hard_irqs_count > 0) {
+	if (napi_prefer_busy_poll(n)) {
+		timeout = READ_ONCE(n->dev->irq_suspend_timeout);
+		if (timeout)
+			ret = false;
+	}
+	if (ret && n->defer_hard_irqs_count > 0) {
 		n->defer_hard_irqs_count--;
 		timeout = READ_ONCE(n->dev->gro_flush_timeout);
 		if (timeout)
@@ -6366,9 +6371,13 @@ static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock,
 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 
 	if (flags & NAPI_F_PREFER_BUSY_POLL) {
-		napi->defer_hard_irqs_count = READ_ONCE(napi->dev->napi_defer_hard_irqs);
-		timeout = READ_ONCE(napi->dev->gro_flush_timeout);
-		if (napi->defer_hard_irqs_count && timeout) {
+		timeout = READ_ONCE(napi->dev->irq_suspend_timeout);
+		if (!timeout) {
+			napi->defer_hard_irqs_count = READ_ONCE(napi->dev->napi_defer_hard_irqs);
+			if (napi->defer_hard_irqs_count)
+				timeout = READ_ONCE(napi->dev->gro_flush_timeout);
+		}
+		if (timeout) {
 			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
 			skip_schedule = true;
 		}
--
2.25.1