Message-Id: <20220812113259.531-1-magnus.karlsson@gmail.com>
Date: Fri, 12 Aug 2022 13:32:59 +0200
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: magnus.karlsson@...el.com, bjorn@...nel.org, ast@...nel.org,
daniel@...earbox.net, netdev@...r.kernel.org,
maciej.fijalkowski@...el.com
Cc: jonathan.lemon@...il.com, bpf@...r.kernel.org,
Alasdair McWilliam <alasdair.mcwilliam@...look.com>,
Intrusion Shield Team <dnevil@...rusion.com>
Subject: [PATCH bpf] xsk: fix corrupted packets for XDP_SHARED_UMEM
From: Magnus Karlsson <magnus.karlsson@...el.com>
Fix an issue in XDP_SHARED_UMEM mode together with aligned mode where
packets are corrupted for the second and any further sockets bound to
the same umem. In other words, this does not affect the first socket
bound to the umem. The culprit of this bug is that the initialization
of the DMA addresses for the pre-populated xsk buffer pool entries was
only performed for the first socket bound to the umem; for all other
sockets, only the linear array of DMA addresses was populated. Fix
this by populating the DMA addresses in the xsk buffer pool for every
socket bound to the same umem.
Fixes: 94033cd8e73b8 ("xsk: Optimize for aligned case")
Reported-by: Alasdair McWilliam <alasdair.mcwilliam@...look.com>
Reported-by: Intrusion Shield Team <dnevil@...rusion.com>
Tested-by: Alasdair McWilliam <alasdair.mcwilliam@...look.com>
Link: https://lore.kernel.org/xdp-newbies/6205E10C-292E-4995-9D10-409649354226@outlook.com/
Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
---
net/xdp/xsk_buff_pool.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
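Note (not part of the patch): the stand-alone C sketch below models the
shared-umem scenario described in the commit message. It is not kernel
code; all names (model_dma_map, model_pool, model_init_dma_info,
model_dma_map_pool) are invented for illustration. It only shows why
doing the per-buffer setup in the per-pool init path covers every
socket bound to the shared umem instead of just the first one.

/* Simplified user-space model of the shared-umem scenario; names and
 * data structures are invented for illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BUFS 4

struct model_dma_map {			/* per-umem mapping, shared by sockets */
	uint64_t dma_pages[NUM_BUFS];
	bool created;
};

struct model_pool {			/* per-socket buffer pool */
	uint64_t buf_dma[NUM_BUFS];
};

/* Analogous to doing the per-buffer setup in xp_init_dma_info(): this
 * runs for every pool bound to the umem, so every socket gets valid
 * addresses.
 */
static void model_init_dma_info(struct model_pool *pool,
				struct model_dma_map *map)
{
	int i;

	for (i = 0; i < NUM_BUFS; i++)
		pool->buf_dma[i] = map->dma_pages[i];
}

static void model_dma_map_pool(struct model_pool *pool,
			       struct model_dma_map *map)
{
	int i;

	if (!map->created) {
		/* Only the first socket creates the mapping for the umem. */
		for (i = 0; i < NUM_BUFS; i++)
			map->dma_pages[i] = 0x1000ull * (i + 1);
		map->created = true;
	}

	/* With the fix, the per-buffer init happens here, on every bind. */
	model_init_dma_info(pool, map);
}

int main(void)
{
	struct model_dma_map map = { .created = false };
	struct model_pool first = { 0 }, second = { 0 };

	model_dma_map_pool(&first, &map);	/* first socket bound to umem */
	model_dma_map_pool(&second, &map);	/* second socket, shared umem */

	printf("second pool buf0 dma: %#llx\n",
	       (unsigned long long)second.buf_dma[0]);
	return 0;
}

Before the fix, the equivalent of model_init_dma_info() was only
reached from the branch that creates the mapping, so the second pool's
buffer addresses stayed zeroed; that is the missing initialization the
patch moves into xp_init_dma_info().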
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index f70112176b7c..a71a8c6edf55 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -379,6 +379,16 @@ static void xp_check_dma_contiguity(struct xsk_dma_map *dma_map)
 
 static int xp_init_dma_info(struct xsk_buff_pool *pool, struct xsk_dma_map *dma_map)
 {
+	if (!pool->unaligned) {
+		u32 i;
+
+		for (i = 0; i < pool->heads_cnt; i++) {
+			struct xdp_buff_xsk *xskb = &pool->heads[i];
+
+			xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
+		}
+	}
+
 	pool->dma_pages = kvcalloc(dma_map->dma_pages_cnt, sizeof(*pool->dma_pages), GFP_KERNEL);
 	if (!pool->dma_pages)
 		return -ENOMEM;
@@ -428,12 +438,6 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
 
 	if (pool->unaligned)
 		xp_check_dma_contiguity(dma_map);
-	else
-		for (i = 0; i < pool->heads_cnt; i++) {
-			struct xdp_buff_xsk *xskb = &pool->heads[i];
-
-			xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
-		}
 
 	err = xp_init_dma_info(pool, dma_map);
 	if (err) {
base-commit: 4e4588f1c4d2e67c993208f0550ef3fae33abce4
--
2.34.1