Message-ID: <20231013064827.61135-7-linyunsheng@huawei.com>
Date: Fri, 13 Oct 2023 14:48:26 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: <davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>
CC: <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Yunsheng Lin
<linyunsheng@...wei.com>, Lorenzo Bianconi <lorenzo@...nel.org>, Alexander
Duyck <alexander.duyck@...il.com>, Liang Chen <liangchen.linux@...il.com>,
Alexander Lobakin <aleksander.lobakin@...el.com>, Eric Dumazet
<edumazet@...gle.com>, Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
<daniel@...earbox.net>, Jesper Dangaard Brouer <hawk@...nel.org>, John
Fastabend <john.fastabend@...il.com>, <bpf@...r.kernel.org>
Subject: [PATCH net-next v11 6/6] net: veth: use newly added page pool API for veth with xdp
Use the newly added page_pool[_cache]_alloc() API to allocate memory
with the least memory utilization and performance penalty.
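
As a rough illustration (not part of the diff below), the skb head
allocation switches to the pattern sketched here; the helper names and
signatures are the ones this series introduces, while 'pool' and the
-ENOMEM returns are placeholders for the caller's context:

	u32 size = min_t(u32, skb->len, max_head_size);
	u32 truesize = SKB_HEAD_ALIGN(size) + VETH_XDP_HEADROOM;
	struct sk_buff *nskb;
	void *data;

	/* May hand out a fragment smaller than a full page when
	 * truesize allows it; truesize is updated to what the pool
	 * actually charged for the fragment.
	 */
	data = page_pool_dev_cache_alloc(pool, &truesize);
	if (!data)
		return -ENOMEM;

	nskb = napi_build_skb(data, truesize);
	if (!nskb) {
		/* 'true' allows recycling directly into the pool's
		 * lockless cache, as veth runs this in NAPI context.
		 */
		page_pool_cache_free(pool, data, true);
		return -ENOMEM;
	}
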
Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
CC: Lorenzo Bianconi <lorenzo@...nel.org>
CC: Alexander Duyck <alexander.duyck@...il.com>
CC: Liang Chen <liangchen.linux@...il.com>
CC: Alexander Lobakin <aleksander.lobakin@...el.com>
---
drivers/net/veth.c | 25 ++++++++++++++++---------
1 file changed, 16 insertions(+), 9 deletions(-)
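
A note on the frag-filling path: page_pool_dev_alloc() reports back the
offset of the allocated fragment within the returned page, so frags
smaller than PAGE_SIZE can share a single page. A minimal sketch of one
loop iteration, again with placeholder error handling and assuming
'pool', 'nskb', 'len' and 'i' from the surrounding function:

	u32 size, truesize, page_offset;
	struct page *page;

	size = min_t(u32, len, PAGE_SIZE);
	truesize = size;

	/* On success, page_offset and truesize describe the fragment
	 * the pool carved out of the returned page.
	 */
	page = page_pool_dev_alloc(pool, &page_offset, &truesize);
	if (!page)
		return -ENOMEM;

	/* Attach the frag at the pool-chosen offset, charging the
	 * truesize the pool reserved rather than a full PAGE_SIZE.
	 */
	skb_add_rx_frag(nskb, i, page, page_offset, size, truesize);
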
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 0deefd1573cf..470791b0b533 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -737,10 +737,11 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 	if (skb_shared(skb) || skb_head_is_locked(skb) ||
 	    skb_shinfo(skb)->nr_frags ||
 	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
-		u32 size, len, max_head_size, off;
+		u32 size, len, max_head_size, off, truesize, page_offset;
 		struct sk_buff *nskb;
 		struct page *page;
 		int i, head_off;
+		void *data;
 
 		/* We need a private copy of the skb and data buffers since
 		 * the ebpf program can modify it. We segment the original skb
@@ -753,14 +754,17 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		if (skb->len > PAGE_SIZE * MAX_SKB_FRAGS + max_head_size)
 			goto drop;
 
+		size = min_t(u32, skb->len, max_head_size);
+		truesize = SKB_HEAD_ALIGN(size) + VETH_XDP_HEADROOM;
+
 		/* Allocate skb head */
-		page = page_pool_dev_alloc_pages(rq->page_pool);
-		if (!page)
+		data = page_pool_dev_cache_alloc(rq->page_pool, &truesize);
+		if (!data)
 			goto drop;
 
-		nskb = napi_build_skb(page_address(page), PAGE_SIZE);
+		nskb = napi_build_skb(data, truesize);
 		if (!nskb) {
-			page_pool_put_full_page(rq->page_pool, page, true);
+			page_pool_cache_free(rq->page_pool, data, true);
 			goto drop;
 		}
 
@@ -768,7 +772,6 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		skb_copy_header(nskb, skb);
 		skb_mark_for_recycle(nskb);
 
-		size = min_t(u32, skb->len, max_head_size);
 		if (skb_copy_bits(skb, 0, nskb->data, size)) {
 			consume_skb(nskb);
 			goto drop;
@@ -783,14 +786,18 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		len = skb->len - off;
 
 		for (i = 0; i < MAX_SKB_FRAGS && off < skb->len; i++) {
-			page = page_pool_dev_alloc_pages(rq->page_pool);
+			size = min_t(u32, len, PAGE_SIZE);
+			truesize = size;
+
+			page = page_pool_dev_alloc(rq->page_pool, &page_offset,
+						   &truesize);
 			if (!page) {
 				consume_skb(nskb);
 				goto drop;
 			}
 
-			size = min_t(u32, len, PAGE_SIZE);
-			skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE);
+			skb_add_rx_frag(nskb, i, page, page_offset, size,
+					truesize);
 			if (skb_copy_bits(skb, off, page_address(page),
 					  size)) {
 				consume_skb(nskb);
--
2.33.0