Message-ID: <20250307-lovely-smiling-honeybee-d15ecc@leitao>
Date: Fri, 7 Mar 2025 05:03:11 -0800
From: Breno Leitao <leitao@...ian.org>
To: eric.dumazet@...il.com
Cc: netdev@...r.kernel.org, horms@...ge.net.au
Subject: netpoll: zap_completion_queue() question
Hello Eric,
I am looking at the netpoll code, specifically at zap_completion_queue(),
and I saw that you tried to get rid of it a while ago with 15e83ed78864d0
("net: remove zap_completion_queue"), but it had to be reverted.
Unfortunately I couldn't find the history behind the revert in the mailing
list archives. Do you remember why it was reverted?
I understand that zap_completion_queue() is called on the netpoll TX side,
from find_skb(), to potentially free some memory (by dropping the skbs
sitting in the completion queue) before we try to allocate an SKB.
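For reference, this is roughly what zap_completion_queue() looks like in
net/core/netpoll.c as I read it (paraphrased from my tree, so the details
may differ slightly): it detaches the per-CPU softnet completion_queue
with IRQs disabled and frees every skb that is safe to free from this
context, handing the rest back to dev_kfree_skb_any():

static void zap_completion_queue(void)
{
	unsigned long flags;
	struct softnet_data *sd = &get_cpu_var(softnet_data);

	if (sd->completion_queue) {
		struct sk_buff *clist;

		/* Detach the whole per-CPU completion list with IRQs off */
		local_irq_save(flags);
		clist = sd->completion_queue;
		sd->completion_queue = NULL;
		local_irq_restore(flags);

		/* Free what is safe to free from this context; hand the
		 * rest back to dev_kfree_skb_any()
		 */
		while (clist) {
			struct sk_buff *skb = clist;

			clist = clist->next;
			if (!skb_irq_freeable(skb)) {
				refcount_set(&skb->users, 1);
				dev_kfree_skb_any(skb);
			} else {
				__kfree_skb(skb);
			}
		}
	}

	put_cpu_var(softnet_data);
}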
I am thinking about the patch below, but I want to check with you first,
since you have context I might be missing.
Thanks
breno
Author: Breno Leitao <leitao@...ian.org>
Date: Fri Mar 7 04:30:08 2025 -0800
netpoll: Only zap completion queue under memory pressure
Optimize the netpoll TX path by removing unnecessary calls to
zap_completion_queue() during normal operation. Previously, this
function was called unconditionally in the find_skb() path, which
unnecessarily slowed down TX processing when system memory was
sufficient.
The completion queue should only be cleared when there's actual
memory pressure, such as when:
1. An SKB was consumed from the pool, and we need to refill the SKB pool
(in refill_skbs_work_handler())
2. New SKBs can't be allocated during polling, in which case
netpoll_poll_dev() is called, which also calls zap_completion_queue()
(see the trimmed sketch after the diff)
This change improves netpoll TX performance in the common case while
maintaining the memory pressure handling capability when needed.
Signed-off-by: Breno Leitao <leitao@...ian.org>
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 8a0df2b274a88..83d6c960d2079 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -283,7 +283,6 @@ static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
int count = 0;
struct sk_buff *skb;
- zap_completion_queue();
repeat:
skb = alloc_skb(len, GFP_ATOMIC);
@@ -628,6 +627,7 @@ static void refill_skbs_work_handler(struct work_struct *work)
struct netpoll *np =
container_of(work, struct netpoll, refill_wq);
+ zap_completion_queue();
refill_skbs(np);
}
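For point 2 in the commit message above: when alloc_skb() fails in
find_skb(), we fall back to netpoll_poll_dev(), which still ends with a
zap_completion_queue() call, so the memory-pressure case keeps its
coverage. Heavily trimmed sketch from my reading of net/core/netpoll.c
(locking and polling details elided):

static void netpoll_poll_dev(struct net_device *dev)
{
	struct netpoll_info *ni = rcu_dereference_bh(dev->npinfo);

	/* Skip polling while dev_open/close holds dev_lock */
	if (!ni || down_trylock(&ni->dev_lock))
		return;

	/* ... ndo_poll_controller / poll_napi(dev) elided ... */

	up(&ni->dev_lock);

	/* The completion queue is still drained on this
	 * allocation-failure fallback path
	 */
	zap_completion_queue();
}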
PS: This patch works on top of https://lore.kernel.org/all/20250306114826.GX3666230@kernel.org/#r