Message-Id: <20250825135342.53110-10-kerneljasonxing@gmail.com>
Date: Mon, 25 Aug 2025 21:53:42 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: davem@...emloft.net,
	edumazet@...gle.com,
	kuba@...nel.org,
	pabeni@...hat.com,
	bjorn@...nel.org,
	magnus.karlsson@...el.com,
	maciej.fijalkowski@...el.com,
	jonathan.lemon@...il.com,
	sdf@...ichev.me,
	ast@...nel.org,
	daniel@...earbox.net,
	hawk@...nel.org,
	john.fastabend@...il.com,
	horms@...nel.org,
	andrew+netdev@...n.ch
Cc: bpf@...r.kernel.org,
	netdev@...r.kernel.org,
	Jason Xing <kernelxing@...cent.com>
Subject: [PATCH net-next v2 9/9] xsk: support dynamic xmit.more control for batch xmit

From: Jason Xing <kernelxing@...cent.com>

Only set xmit.more to false for the last skb.

In theory, setting xmit.more to false only for the last packet sent in
each round can bring a notable benefit, such as avoiding triggering too
many IRQs.

Compared to the numbers for batch mode, a large improvement (26%) can
be seen on the i40e driver, while a slight decrease (10%) is observed
on virtio_net.

Suggested-by: Jesper Dangaard Brouer <hawk@...nel.org>
Signed-off-by: Jason Xing <kernelxing@...cent.com>
---
Considering the different implementations in VMs and on the host, I'm
not sure whether we need to add another setsockopt to control this...
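
For illustration only (not part of this patch): a rough sketch of how a
driver's ndo_start_xmit typically consumes the xmit.more hint that this
patch now sets per packet. The foo_* names are hypothetical; the helper
drivers use to read the hint is netdev_xmit_more().

static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct foo_tx_ring *ring = foo_pick_tx_ring(dev, skb);

	/* Queue descriptors for this skb on the TX ring. */
	foo_map_and_queue(ring, skb);

	/*
	 * Defer the doorbell write (and hence the eventual completion IRQ)
	 * while more packets are coming, i.e. while the stack passed
	 * more == true to netdev_start_xmit(). Ring it for the last packet
	 * of the batch, or when the queue is being stopped.
	 */
	if (!netdev_xmit_more() ||
	    netif_xmit_stopped(netdev_get_tx_queue(dev, ring->qid)))
		foo_ring_doorbell(ring);

	return NETDEV_TX_OK;
}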
---
 net/core/dev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index a5a6b9a199e9..9d28a3d0ce3b 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4751,7 +4751,9 @@ int xsk_direct_xmit_batch(struct sk_buff **skbs, struct net_device *dev,
 	local_bh_disable();
 	HARD_TX_LOCK(dev, txq, smp_processor_id());
 	for (*cur = start; *cur >= end; (*cur)--) {
-		ret = netdev_start_xmit(skbs[*cur], dev, txq, false);
+		bool more = !!(*cur != end);
+
+		ret = netdev_start_xmit(skbs[*cur], dev, txq, more);
 		if (ret != NETDEV_TX_OK)
 			break;
 	}
-- 
2.41.3

