Message-ID: <99890c72-eb61-e032-944a-6671d6494c23@huawei.com>
Date: Tue, 25 Apr 2023 19:19:12 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
CC: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
Lorenzo Bianconi <lorenzo@...nel.org>,
<netdev@...r.kernel.org>, <bpf@...r.kernel.org>,
<davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
<pabeni@...hat.com>, <hawk@...nel.org>, <john.fastabend@...il.com>,
<ast@...nel.org>, <daniel@...earbox.net>
Subject: Re: [PATCH v2 net-next 1/2] net: veth: add page_pool for page
recycling

On 2023/4/24 21:10, Maciej Fijalkowski wrote:
>>> There was a discussion in the past to reduce XDP_PACKET_HEADROOM to 192B but
>>> this is not merged yet and it is not related to this series. We can address
>>> your comments in a follow-up patch when XDP_PACKET_HEADROOM series is merged.
>
> Intel drivers still work just fine at 192 headroom and split the page but
> it makes it problematic for BIG TCP where MAX_SKB_FRAGS from shinfo needs

I am not sure why we are not enabling skb_shinfo(skb)->frag_list to support
BIG TCP instead of increasing MAX_SKB_FRAGS; perhaps there was some discussion
about this in the past that I am not aware of?
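
(For reference, a rough sketch of the difference I have in mind, using the
existing helpers from include/linux/skbuff.h; the byte-counting loops are
only illustrative, not an actual kernel path:)

	/* frags array: capped at MAX_SKB_FRAGS entries in skb_shared_info,
	 * which is why BIG TCP currently needs to raise MAX_SKB_FRAGS.
	 */
	struct skb_shared_info *shinfo = skb_shinfo(skb);
	struct sk_buff *iter;
	unsigned int i, total = 0;

	for (i = 0; i < shinfo->nr_frags; i++)
		total += skb_frag_size(&shinfo->frags[i]);

	/* frag_list: a chain of skbs with no fixed upper bound, so a
	 * jumbo frame could grow without touching MAX_SKB_FRAGS.
	 */
	skb_walk_frags(skb, iter)
		total += iter->len;
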
> to be increased. So it's the tailroom that becomes the bottleneck, not the
> headroom. I believe at some point we will convert our drivers to page_pool
> with full 4k page dedicated for a single frame.

Can we use header splitting to ensure there is enough tailroom for
napi_build_skb() or an xdp_frame with its shinfo?
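
(Roughly the arithmetic I have in mind, assuming a dedicated 4K page per
frame; this is only a sketch, not tested code:)

	/* With header split, the payload lands in a separate buffer, so
	 * this page only has to hold the headroom, the protocol headers
	 * and the shared info needed by napi_build_skb()/xdp_frame.
	 */
	unsigned int tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	unsigned int max_headers = PAGE_SIZE - XDP_PACKET_HEADROOM - tailroom;

	/* As long as the split-off header portion fits in max_headers,
	 * there is always enough tailroom left for the shinfo.
	 */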