Message-ID: <20231020095952.11055-6-linyunsheng@huawei.com>
Date: Fri, 20 Oct 2023 17:59:52 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: <davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>
CC: <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	Yunsheng Lin <linyunsheng@...wei.com>,
	Lorenzo Bianconi <lorenzo@...nel.org>,
	Alexander Duyck <alexander.duyck@...il.com>,
	Liang Chen <liangchen.linux@...il.com>,
	Alexander Lobakin <aleksander.lobakin@...el.com>,
	Eric Dumazet <edumazet@...gle.com>,
	Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	Jesper Dangaard Brouer <hawk@...nel.org>,
	John Fastabend <john.fastabend@...il.com>,
	<bpf@...r.kernel.org>
Subject: [PATCH net-next v12 5/5] net: veth: use newly added page pool API for veth with xdp

Use the page_pool_alloc() API to allocate memory with the least memory
utilization and performance penalty.

Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
CC: Lorenzo Bianconi <lorenzo@...nel.org>
CC: Alexander Duyck <alexander.duyck@...il.com>
CC: Liang Chen <liangchen.linux@...il.com>
CC: Alexander Lobakin <aleksander.lobakin@...el.com>
---
 drivers/net/veth.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 0deefd1573cf..9980517ed8b0 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -737,10 +737,11 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 	if (skb_shared(skb) || skb_head_is_locked(skb) ||
 	    skb_shinfo(skb)->nr_frags ||
 	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
-		u32 size, len, max_head_size, off;
+		u32 size, len, max_head_size, off, truesize, page_offset;
 		struct sk_buff *nskb;
 		struct page *page;
 		int i, head_off;
+		void *va;
 
 		/* We need a private copy of the skb and data buffers since
 		 * the ebpf program can modify it. We segment the original skb
@@ -753,14 +754,17 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		if (skb->len > PAGE_SIZE * MAX_SKB_FRAGS + max_head_size)
 			goto drop;
 
+		size = min_t(u32, skb->len, max_head_size);
+		truesize = SKB_HEAD_ALIGN(size) + VETH_XDP_HEADROOM;
+
 		/* Allocate skb head */
-		page = page_pool_dev_alloc_pages(rq->page_pool);
-		if (!page)
+		va = page_pool_dev_alloc_va(rq->page_pool, &truesize);
+		if (!va)
 			goto drop;
 
-		nskb = napi_build_skb(page_address(page), PAGE_SIZE);
+		nskb = napi_build_skb(va, truesize);
 		if (!nskb) {
-			page_pool_put_full_page(rq->page_pool, page, true);
+			page_pool_free_va(rq->page_pool, va, true);
 			goto drop;
 		}
 
@@ -768,7 +772,6 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		skb_copy_header(nskb, skb);
 		skb_mark_for_recycle(nskb);
 
-		size = min_t(u32, skb->len, max_head_size);
 		if (skb_copy_bits(skb, 0, nskb->data, size)) {
 			consume_skb(nskb);
 			goto drop;
@@ -783,14 +786,18 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		len = skb->len - off;
 
 		for (i = 0; i < MAX_SKB_FRAGS && off < skb->len; i++) {
-			page = page_pool_dev_alloc_pages(rq->page_pool);
+			size = min_t(u32, len, PAGE_SIZE);
+			truesize = size;
+
+			page = page_pool_dev_alloc(rq->page_pool, &page_offset,
+						   &truesize);
 			if (!page) {
 				consume_skb(nskb);
 				goto drop;
 			}
 
-			size = min_t(u32, len, PAGE_SIZE);
-			skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE);
+			skb_add_rx_frag(nskb, i, page, page_offset, size,
+					truesize);
 			if (skb_copy_bits(skb, off, page_address(page),
 					  size)) {
 				consume_skb(nskb);
-- 
2.33.0
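
For context, the pattern this patch adopts is the frag-aware allocation API
added earlier in the series: page_pool_dev_alloc_va() returns a virtual
address and writes the actual usable size back through its size pointer,
page_pool_free_va() releases that buffer, and page_pool_dev_alloc() returns
a page plus the offset of the fragment within it. A minimal sketch of the
pattern outside the veth context might look as follows; the function name
pp_alloc_sketch() and the request sizes are illustrative assumptions, not
part of the patch:

#include <net/page_pool/helpers.h>

/* Illustrative sketch only (not from the patch): demonstrates the
 * page_pool_dev_alloc_va()/page_pool_free_va() and page_pool_dev_alloc()
 * calls used above.  Assumes "pool" was created with page_pool_create().
 */
static int pp_alloc_sketch(struct page_pool *pool)
{
	unsigned int truesize, offset;
	struct page *page;
	void *va;

	/* Head-style allocation: request 2048 bytes; the pool may round
	 * truesize up and may hand back a sub-page fragment.
	 */
	truesize = 2048;	/* assumed request size */
	va = page_pool_dev_alloc_va(pool, &truesize);
	if (!va)
		return -ENOMEM;

	/* ... use up to truesize bytes at va ... */

	page_pool_free_va(pool, va, true);	/* true: allow direct recycle */

	/* Frag-style allocation: also hands back the fragment's offset
	 * within the returned page, as the veth frag loop needs.
	 */
	truesize = 1024;	/* assumed request size */
	page = page_pool_dev_alloc(pool, &offset, &truesize);
	if (!page)
		return -ENOMEM;

	/* ... data lives at page_address(page) + offset ... */

	page_pool_put_full_page(pool, page, true);
	return 0;
}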