Message-ID: <3c78f045-aa8e-22a5-4b38-ab271122a79e@huawei.com>
Date: Mon, 24 Apr 2023 10:29:58 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
CC: Lorenzo Bianconi <lorenzo@...nel.org>, <netdev@...r.kernel.org>,
<bpf@...r.kernel.org>, <davem@...emloft.net>,
<edumazet@...gle.com>, <kuba@...nel.org>, <pabeni@...hat.com>,
<hawk@...nel.org>, <john.fastabend@...il.com>, <ast@...nel.org>,
<daniel@...earbox.net>
Subject: Re: [PATCH v2 net-next 1/2] net: veth: add page_pool for page
recycling
On 2023/4/23 22:20, Lorenzo Bianconi wrote:
>> On 2023/4/23 2:54, Lorenzo Bianconi wrote:
>>> struct veth_priv {
>>> @@ -727,17 +729,20 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
>>> goto drop;
>>>
>>> /* Allocate skb head */
>>> - page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
>>> + page = page_pool_dev_alloc_pages(rq->page_pool);
>>> if (!page)
>>> goto drop;
>>>
>>> nskb = build_skb(page_address(page), PAGE_SIZE);
>>
>> If the page pool is used with PP_FLAG_PAGE_FRAG, there may be some additional
>> improvement for the 1500B MTU case, as it seems a 4K page is able to hold two skbs.
>> It would also reduce memory usage, which is a significant saving if the page
>> size is 64K.
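Something like the below is what I had in mind for veth_convert_skb_to_xdp_buff();
just a rough sketch, the frag size and the offset handling are assumptions on my
side, not tested code:

	/* pool created with .flags = PP_FLAG_PAGE_FRAG, .order = 0 */
	unsigned int offset;
	struct page *page;

	/* carve a VETH_BUF_SIZE fragment out of a (possibly shared) page */
	page = page_pool_dev_alloc_frag(rq->page_pool, &offset, VETH_BUF_SIZE);
	if (!page)
		goto drop;

	/* build the skb head on the fragment instead of a full page */
	nskb = build_skb(page_address(page) + offset, VETH_BUF_SIZE);
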
>
> please correct me if I am wrong, but I think the 1500B MTU case does not fit in the
> half-page buffer size, since we need to take VETH_XDP_HEADROOM into account.
> In particular:
>
> - VETH_BUF_SIZE = 2048
> - VETH_XDP_HEADROOM = 256 + 2 = 258
On some arches NET_IP_ALIGN is zero.
I suppose XDP_PACKET_HEADROOM is there for the xdp_frame and data_meta: it seems
xdp_frame is only 40 bytes on a 64-bit arch and the max metalen is 32, as
xdp_metalen_invalid() suggests. Is there any other reason why we need
256 bytes here?
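
For reference, my rough arithmetic (assuming the usual 320-byte skb_shared_info
overhead on a 64-bit build; the 72-byte headroom is only a hypothetical number to
illustrate the point):

	/* current headroom:         SKB_WITH_OVERHEAD(2048 - 258) = 1790 - 320 = 1470  -> 1500B MTU does not fit */
	/* hypothetical 40 + 32 = 72: SKB_WITH_OVERHEAD(2048 - 72)  = 1976 - 320 = 1656  -> 1514B frame would fit  */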
> - max_headsize = SKB_WITH_OVERHEAD(VETH_BUF_SIZE - VETH_XDP_HEADROOM) = 1470
>
> Even in this case we will need to consume a full page. In fact, performance
> is a little bit worse:
>
> MTU 1500: tcp throughput ~ 8.3Gbps
>
> Do you agree or am I missing something?
>
> Regards,
> Lorenzo