Message-ID: <20250325044358.2675384-1-skhawaja@google.com>
Date: Tue, 25 Mar 2025 04:43:58 +0000
From: Samiullah Khawaja <skhawaja@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, almasrymina@...gle.com,
willemb@...gle.com, jdamato@...tly.com, mkarsten@...terloo.ca
Cc: netdev@...r.kernel.org, skhawaja@...gle.com
Subject: [PATCH net-next] xsk: Bring back busy polling support in XDP_COPY
Commit 5ef44b3cb43b ("xsk: Bring back busy polling support") restored
busy polling support in xsk for XDP_ZEROCOPY after it had been broken by
commit 86e25f40aa1e ("net: napi: Add napi_config"). Busy polling with
XDP_COPY, however, remained broken, since the napi_id setup in
xsk_rcv_check() had been removed.
Bring back the setup of napi_id for XDP_COPY so that socket-level
SO_BUSY_POLL can be used to poll the underlying napi.
Tested using the AF_XDP support in virtio-net by running the xsk_rr
AF_XDP benchmarking tool posted here:
https://lore.kernel.org/all/20250320163523.3501305-1-skhawaja@google.com/T/
Enabled socket busy polling using the following commands in the qemu
guest:
```
sudo ethtool -L eth0 combined 1
sudo ethtool -G eth0 rx 1024
echo 400 | sudo tee /proc/sys/net/core/busy_read
echo 100 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs
echo 15000 | sudo tee /sys/class/net/eth0/gro_flush_timeout
```
Fixes: 5ef44b3cb43b ("xsk: Bring back busy polling support")
Fixes: 86e25f40aa1e ("net: napi: Add napi_config")
Signed-off-by: Samiullah Khawaja <skhawaja@...gle.com>
---
net/xdp/xsk.c | 22 +++++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index e5d104ce7b82..de8bf97b2cb9 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -310,6 +310,18 @@ static bool xsk_is_bound(struct xdp_sock *xs)
 	return false;
 }
 
+static void __xsk_mark_napi_id_once(struct sock *sk, struct net_device *dev, u32 qid)
+{
+	struct netdev_rx_queue *rxq;
+
+	if (qid >= dev->real_num_rx_queues)
+		return;
+
+	rxq = __netif_get_rx_queue(dev, qid);
+	if (rxq->napi)
+		__sk_mark_napi_id_once(sk, rxq->napi->napi_id);
+}
+
 static int xsk_rcv_check(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
 {
 	if (!xsk_is_bound(xs))
@@ -323,6 +335,7 @@ static int xsk_rcv_check(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
 		return -ENOSPC;
 	}
 
+	__xsk_mark_napi_id_once(&xs->sk, xs->dev, xs->queue_id);
 	return 0;
 }
@@ -1300,13 +1313,8 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
 	xs->queue_id = qid;
 	xp_add_xsk(xs->pool, xs);
 
-	if (xs->zc && qid < dev->real_num_rx_queues) {
-		struct netdev_rx_queue *rxq;
-
-		rxq = __netif_get_rx_queue(dev, qid);
-		if (rxq->napi)
-			__sk_mark_napi_id_once(sk, rxq->napi->napi_id);
-	}
+	if (xs->zc)
+		__xsk_mark_napi_id_once(sk, dev, qid);
 
 out_unlock:
 	if (err) {
--
2.49.0.395.g12beb8f557-goog