Message-Id: <20240903180511.244041-1-maciej.fijalkowski@intel.com>
Date: Tue,  3 Sep 2024 20:05:11 +0200
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: intel-wired-lan@...ts.osuosl.org
Cc: netdev@...r.kernel.org,
	anthony.l.nguyen@...el.com,
	magnus.karlsson@...el.com,
	bjorn@...nel.org,
	Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
	Dries De Winter <ddewinter@...amedia.com>
Subject: [PATCH iwl-net] ice: xsk: fix Rx allocation on non-coherent systems

In cases where DMA operations have to be synchronized,
xsk_buff_alloc_batch() can return a single buffer instead of the
requested count. Detect this situation when filling the HW Rx ring in
the ZC driver and fall back to calling xsk_buff_alloc() in a loop so
that the ring still gets the requested number of buffers.

Reported-and-tested-by: Dries De Winter <ddewinter@...amedia.com>
Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
---
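Note for reviewers (not for the commit log): below is a minimal,
hypothetical sketch of the allocation pattern this patch adopts, with
the descriptor-writing details of ice_fill_rx_descs() stripped away.
The helper name fill_pool_bufs() is made up for illustration; the
xsk_buff_*() calls are the existing pool API from
<net/xdp_sock_drv.h>.

	/* Try the batch API first; on non-coherent systems it may stop
	 * after a single buffer, so top up the array one buffer at a
	 * time. Returns the number of buffers written to xdp[], or 0
	 * if the pool cannot satisfy the request.
	 */
	static u32 fill_pool_bufs(struct xsk_buff_pool *pool,
				  struct xdp_buff **xdp, u32 count)
	{
		u32 buffs, i;

		if (unlikely(!xsk_buff_can_alloc(pool, count)))
			return 0;

		buffs = xsk_buff_alloc_batch(pool, xdp, count);
		for (; buffs < count; buffs++) {
			xdp[buffs] = xsk_buff_alloc(pool);
			if (unlikely(!xdp[buffs]))
				goto free;
		}
		return buffs;

	free:
		/* give back everything allocated so far so the pool
		 * stays balanced
		 */
		for (i = 0; i < buffs; i++)
			xsk_buff_free(xdp[i]);
		return 0;
	}

The actual patch keeps buffs as u16 and also writes the DMA address of
each buffer into the Rx descriptors inside the hot loop; the sketch
keeps only the allocate/fallback/free flow.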
 drivers/net/ethernet/intel/ice/ice_xsk.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 240a7bec242b..889d0a5070d7 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -449,7 +449,24 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
 	u16 buffs;
 	int i;
 
+	if (unlikely(!xsk_buff_can_alloc(pool, count)))
+		return 0;
+
 	buffs = xsk_buff_alloc_batch(pool, xdp, count);
+	/* fill the remainder that the batch API did not provide for us;
+	 * this is usually the case on non-coherent systems that require
+	 * DMA syncs
+	 */
+	for (; buffs < count; buffs++) {
+		struct xdp_buff *tmp;
+
+		tmp = xsk_buff_alloc(pool);
+		if (unlikely(!tmp))
+			goto free;
+
+		xdp[buffs] = tmp;
+	}
+
 	for (i = 0; i < buffs; i++) {
 		dma = xsk_buff_xdp_get_dma(*xdp);
 		rx_desc->read.pkt_addr = cpu_to_le64(dma);
@@ -465,6 +482,13 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
 	}
 
 	return buffs;
+
+free:
+	for (i = 0; i < buffs; i++) {
+		xsk_buff_free(*xdp);
+		xdp++;
+	}
+	return 0;
 }
 
 /**
-- 
2.34.1

