Message-Id: <20240119233037.537084-3-maciej.fijalkowski@intel.com>
Date: Sat, 20 Jan 2024 00:30:28 +0100
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: bpf@...r.kernel.org,
ast@...nel.org,
daniel@...earbox.net,
andrii@...nel.org
Cc: netdev@...r.kernel.org,
magnus.karlsson@...el.com,
bjorn@...nel.org,
maciej.fijalkowski@...el.com,
echaudro@...hat.com,
lorenzo@...nel.org,
martin.lau@...ux.dev,
tirthendu.sarkar@...el.com,
john.fastabend@...il.com
Subject: [PATCH v4 bpf 02/11] xsk: make xsk_buff_pool responsible for clearing xdp_buff::flags

Currently, ZC drivers that support multi-buffer XDP clear the flag that
indicates whether a particular xdp_buff contains fragments only on the
first processed fragment. The rest of the ZC XSK logic relies on that as
well, but we could still end up with fragments that have
XDP_FLAGS_HAS_FRAGS set, which would confuse for example
xsk_buff_free(), which might be called when bpf_xdp_adjust_tail()
removes a buffer.

To fix this, let us clear the mentioned flag on the xsk_buff_pool side
at allocation time.
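
Purely for illustration (a simplified userspace model with made-up
names, not the kernel API), the sketch below shows why a stale
fragments flag on a recycled buffer confuses the free path, and why
clearing it when the pool hands the buffer out is enough:

#include <stdio.h>

#define HAS_FRAGS_FLAG 0x1	/* stand-in for XDP_FLAGS_HAS_FRAGS */

struct buf {
	unsigned int flags;
	int id;
};

/* recycle without resetting flags - models the behaviour before this patch */
static struct buf *alloc_stale(struct buf *b)
{
	return b;
}

/* recycle and reset flags - models clearing at allocation time */
static struct buf *alloc_clean(struct buf *b)
{
	b->flags = 0;
	return b;
}

/* models the free path: a set flag makes it treat the buffer as a
 * multi-buffer head and release the whole fragment list
 */
static void free_buf(struct buf *b)
{
	if (b->flags & HAS_FRAGS_FLAG)
		printf("buf %d: would free every frag on the list (bug)\n", b->id);
	else
		printf("buf %d: frees just this buffer (ok)\n", b->id);
}

int main(void)
{
	struct buf recycled = { .flags = HAS_FRAGS_FLAG, .id = 1 };

	/* stale flag survives recycling -> wrong free behaviour */
	free_buf(alloc_stale(&recycled));

	/* pool clears flags on every allocation -> correct behaviour */
	recycled.flags = HAS_FRAGS_FLAG;
	free_buf(alloc_clean(&recycled));

	return 0;
}

The actual change below does the equivalent in xp_alloc(),
xp_alloc_new_from_fq() and xp_alloc_reused(), so every xdp_buff handed
out by the pool starts with flags cleared.
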
Fixes: 1bbc04de607b ("ice: xsk: add RX multi-buffer support")
Fixes: 1c9ba9c14658 ("i40e: xsk: add RX multi-buffer support")
Fixes: 24ea50127ecf ("xsk: support mbuf on ZC RX")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 1 -
 drivers/net/ethernet/intel/ice/ice_xsk.c   | 1 -
 net/xdp/xsk_buff_pool.c                    | 3 +++
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index e99fa854d17f..fede0bb3e047 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -499,7 +499,6 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		xdp_res = i40e_run_xdp_zc(rx_ring, first, xdp_prog);
 		i40e_handle_xdp_result_zc(rx_ring, first, rx_desc, &rx_packets,
 					  &rx_bytes, xdp_res, &failure);
-		first->flags = 0;
 		next_to_clean = next_to_process;
 		if (failure)
 			break;
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 5d1ae8e4058a..d9073a618ad6 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -895,7 +895,6 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 
 		if (!first) {
 			first = xdp;
-			xdp_buff_clear_frags_flag(first);
 		} else if (ice_add_xsk_frag(rx_ring, first, xdp, size)) {
 			break;
 		}
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 28711cc44ced..dc5659da6728 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -555,6 +555,7 @@ struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool)
 
 	xskb->xdp.data = xskb->xdp.data_hard_start + XDP_PACKET_HEADROOM;
 	xskb->xdp.data_meta = xskb->xdp.data;
+	xskb->xdp.flags = 0;
 
 	if (pool->dma_need_sync) {
 		dma_sync_single_range_for_device(pool->dev, xskb->dma, 0,
@@ -601,6 +602,7 @@ static u32 xp_alloc_new_from_fq(struct xsk_buff_pool *pool, struct xdp_buff **xd
 		}
 
 		*xdp = &xskb->xdp;
+		xskb->xdp.flags = 0;
 		xdp++;
 	}
 
@@ -621,6 +623,7 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
 		list_del_init(&xskb->free_list_node);
 
 		*xdp = &xskb->xdp;
+		xskb->xdp.flags = 0;
 		xdp++;
 	}
 	pool->free_list_cnt -= nb_entries;
--
2.34.1