Message-Id: <20230803140441.53596-3-huangjie.albert@bytedance.com>
Date: Thu, 3 Aug 2023 22:04:28 +0800
From: "huangjie.albert" <huangjie.albert@...edance.com>
To: davem@...emloft.net,
edumazet@...gle.com,
kuba@...nel.org,
pabeni@...hat.com
Cc: "huangjie.albert" <huangjie.albert@...edance.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Björn Töpel <bjorn@...nel.org>,
Magnus Karlsson <magnus.karlsson@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Pavel Begunkov <asml.silence@...il.com>,
Shmulik Ladkani <shmulik.ladkani@...il.com>,
Kees Cook <keescook@...omium.org>,
Richard Gobert <richardbgobert@...il.com>,
Yunsheng Lin <linyunsheng@...wei.com>,
netdev@...r.kernel.org (open list:NETWORKING DRIVERS),
linux-kernel@...r.kernel.org (open list),
bpf@...r.kernel.org (open list:XDP (eXpress Data Path))
Subject: [RFC Optimizing veth xsk performance 02/10] xsk: add dma_check_skip for skipping dma check
For virtual net devices such as veth, there is no need to do the
DMA check if zero copy is supported. Add this flag after 'unaligned',
because there is a 4-byte hole there:
pahole -V ./net/xdp/xsk_buff_pool.o:
-----------
...
/* --- cacheline 3 boundary (192 bytes) --- */
u32 chunk_size; /* 192 4 */
u32 frame_len; /* 196 4 */
u8 cached_need_wakeup; /* 200 1 */
bool uses_need_wakeup; /* 201 1 */
bool dma_need_sync; /* 202 1 */
bool unaligned; /* 203 1 */
/* XXX 4 bytes hole, try to pack */
void * addrs; /* 208 8 */
spinlock_t cq_lock; /* 216 4 */
...
-----------
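After this patch, the layout should look roughly as follows (a sketch
based on the offsets above; re-run pahole to confirm). The new one-byte
flag fills part of the hole, so the struct does not grow:

struct xsk_buff_pool {
	...
	u8 cached_need_wakeup;		/* 200	1 */
	bool uses_need_wakeup;		/* 201	1 */
	bool dma_need_sync;		/* 202	1 */
	bool unaligned;			/* 203	1 */
	bool dma_check_skip;		/* 204	1 (new) */

	/* XXX 3 bytes hole, try to pack */

	void *addrs;			/* 208	8 */
	...
};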
Signed-off-by: huangjie.albert <huangjie.albert@...edance.com>
---
 include/net/xsk_buff_pool.h | 1 +
 net/xdp/xsk_buff_pool.c     | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index b0bdff26fc88..fe31097dc11b 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -81,6 +81,7 @@ struct xsk_buff_pool {
 	bool uses_need_wakeup;
 	bool dma_need_sync;
 	bool unaligned;
+	bool dma_check_skip;
 	void *addrs;
 	/* Mutual exclusion of the completion ring in the SKB mode. Two cases to protect:
 	 * NAPI TX thread and sendmsg error paths in the SKB destructor callback and when
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index b3f7b310811e..ed251b8e8773 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -85,6 +85,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
 		  XDP_PACKET_HEADROOM;
 	pool->umem = umem;
 	pool->addrs = umem->addrs;
+	pool->dma_check_skip = false;
 	INIT_LIST_HEAD(&pool->free_list);
 	INIT_LIST_HEAD(&pool->xskb_list);
 	INIT_LIST_HEAD(&pool->xsk_tx_list);
@@ -202,7 +203,7 @@ int xp_assign_dev(struct xsk_buff_pool *pool,
 	if (err)
 		goto err_unreg_pool;
 
-	if (!pool->dma_pages) {
+	if (!pool->dma_pages && !pool->dma_check_skip) {
 		WARN(1, "Driver did not DMA map zero-copy buffers");
 		err = -EINVAL;
 		goto err_unreg_xsk;
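
For reference, a minimal sketch of how a DMA-less virtual driver is
expected to use this flag. The handler name below is hypothetical; the
actual veth wiring is added later in this series:

#include <linux/netdevice.h>
#include <net/xsk_buff_pool.h>

/* Called from the driver's ndo_bpf() XDP_SETUP_XSK_POOL path, which
 * xp_assign_dev() invokes before the dma_pages check above.
 * (Hypothetical sketch, not the actual veth patch.)
 */
static int veth_xsk_pool_enable(struct net_device *dev,
				struct xsk_buff_pool *pool, u16 qid)
{
	/* A virtual device never DMA-maps the umem, so ask the core
	 * to skip the "Driver did not DMA map zero-copy buffers" check.
	 */
	pool->dma_check_skip = true;

	/* ... driver-specific queue setup would go here ... */
	return 0;
}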
--
2.20.1