Message-ID: <20241029182703.2698171-1-csander@purestorage.com>
Date: Tue, 29 Oct 2024 12:26:58 -0600
From: Caleb Sander Mateos <csander@...estorage.com>
To: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>
Cc: Caleb Sander Mateos <csander@...estorage.com>,
netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH] net: skip RPS if packet is already on target CPU

If RPS is enabled, all packets with a CPU flow hint are enqueued to the
target CPU's input_pkt_queue and process_backlog() is scheduled on that
CPU to dequeue and process the packets. If ARFS has already steered the
packets to the correct CPU, this additional queuing is unnecessary and
the spinlocks involved incur significant CPU overhead.
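
For reference, the pre-patch steering path looks roughly like the sketch
below (simplified from netif_receive_skb_internal() in net/core/dev.c;
the exact locking helpers and NAPI scheduling are omitted):

	/* Simplified sketch, not the literal kernel code. */
	int cpu = get_rps_cpu(skb->dev, skb, &rflow);

	if (cpu >= 0) {
		/* Detour through the target CPU's backlog; takes the
		 * input_pkt_queue lock even when cpu is already the
		 * current CPU.
		 */
		ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
	} else {
		/* No flow hint: process the skb on the current CPU. */
		ret = __netif_receive_skb(skb);
	}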

In netif_receive_skb_internal() and netif_receive_skb_list_internal(),
check whether the CPU returned by get_rps_cpu() is the current CPU. If
so, bypass input_pkt_queue and process the packet(s) immediately on the
current CPU.

Signed-off-by: Caleb Sander Mateos <csander@...estorage.com>
---
 net/core/dev.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index c682173a7642..714a47897c75 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5855,11 +5855,11 @@ static int netif_receive_skb_internal(struct sk_buff *skb)
 #ifdef CONFIG_RPS
 	if (static_branch_unlikely(&rps_needed)) {
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
 		int cpu = get_rps_cpu(skb->dev, skb, &rflow);
 
-		if (cpu >= 0) {
+		if (cpu >= 0 && cpu != smp_processor_id()) {
 			ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 			rcu_read_unlock();
 			return ret;
 		}
 	}
@@ -5884,15 +5884,17 @@ void netif_receive_skb_list_internal(struct list_head *head)
 	list_splice_init(&sublist, head);
 
 	rcu_read_lock();
 #ifdef CONFIG_RPS
 	if (static_branch_unlikely(&rps_needed)) {
+		int curr_cpu = smp_processor_id();
+
 		list_for_each_entry_safe(skb, next, head, list) {
 			struct rps_dev_flow voidflow, *rflow = &voidflow;
 			int cpu = get_rps_cpu(skb->dev, skb, &rflow);
 
-			if (cpu >= 0) {
+			if (cpu >= 0 && cpu != curr_cpu) {
 				/* Will be handled, remove from list */
 				skb_list_del_init(skb);
 				enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 			}
 		}
--
2.45.2