Message-ID: <b7253d36-26cc-e5a3-e34a-d28d6fd8fde0@intel.com>
Date: Thu, 15 Jun 2023 15:59:39 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jakub Kicinski <kuba@...nel.org>,
Yunsheng Lin <linyunsheng@...wei.com>
CC: <davem@...emloft.net>, <pabeni@...hat.com>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Alexander Duyck <alexander.duyck@...il.com>,
Yisen Zhuang <yisen.zhuang@...wei.com>,
Salil Mehta <salil.mehta@...wei.com>,
Eric Dumazet <edumazet@...gle.com>,
Sunil Goutham <sgoutham@...vell.com>,
Geetha sowjanya <gakula@...vell.com>,
Subbaraya Sundeep <sbhatta@...vell.com>,
hariprasad <hkelam@...vell.com>,
Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>,
Felix Fietkau <nbd@....name>,
Ryder Lee <ryder.lee@...iatek.com>,
Shayne Chen <shayne.chen@...iatek.com>,
Sean Wang <sean.wang@...iatek.com>,
Kalle Valo <kvalo@...nel.org>,
Matthias Brugger <matthias.bgg@...il.com>,
David Christensen <drc@...ux.vnet.ibm.com>,
AngeloGioacchino Del Regno
<angelogioacchino.delregno@...labora.com>,
"Jesper Dangaard Brouer" <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
<linux-rdma@...r.kernel.org>, <linux-wireless@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-mediatek@...ts.infradead.org>
Subject: Re: [PATCH net-next v4 4/5] page_pool: remove PP_FLAG_PAGE_FRAG flag
From: Jakub Kicinski <kuba@...nel.org>
Date: Wed, 14 Jun 2023 10:19:54 -0700
> On Mon, 12 Jun 2023 21:02:55 +0800 Yunsheng Lin wrote:
>> struct page_pool_params pp_params = {
>> - .flags = PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG |
>> - PP_FLAG_DMA_SYNC_DEV,
>> + .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
>> .order = hns3_page_order(ring),
>
> Does hns3_page_order() set a good example for the users?
>
> static inline unsigned int hns3_page_order(struct hns3_enet_ring *ring)
> {
> #if (PAGE_SIZE < 8192)
> if (ring->buf_size > (PAGE_SIZE / 2))
> return 1;
> #endif
> return 0;
> }
Oh lol, just what Intel drivers do. They don't have a pool to keep a
bunch of pages around (they can recycle a page only within its own Rx
buffer), so in order to still recycle them, they allocate order-1 pages
to be able to flip the halves >_<
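For context, a minimal sketch of that half-flipping trick (hypothetical,
simplified field names; not the actual Intel driver code):

#include <linux/mm_types.h>

/* One Rx buffer backed by half of an order-1 page: truesize is half of
 * the page, so XORing the offset with it bounces between the two halves.
 */
struct rx_buffer {
	struct page *page;
	unsigned int page_offset;
	unsigned int truesize;
};

static void rx_buffer_flip(struct rx_buffer *buf)
{
	/* Reuse the other half on the next refill; the half just handed
	 * to the stack stays untouched until its skb is freed.
	 */
	buf->page_offset ^= buf->truesize;
}

A real driver additionally checks the page refcount before reusing it,
which is the "recycle only within its own buffer" limitation above.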
>
> Why allocate order 1 pages for buffers which would fit in a single page?
> I feel like this sort of heuristic should be built into the API itself.
Offtop:
I tested this series with IAVF: a very small perf regression* (almost
within stddev) compared to the plain 1-page-per-frame Page Pool series,
but 21 MB less RAM taken compared to both the "old" PP series and the
baseline, nice :D
(+Cc David Christensen, he'll be glad to hear we're stopping eating
64 KB pages)
* this might be caused by the fact that in the previous version I was
hardcoding the truesize, while now it depends on what page_pool_alloc()
returns. Same for the Rx offset: it was always 0 previously, as every
frame was placed at the start of a page; now it depends on how PP
places** it.
With an MTU of 1500 and no XDP, two frames fit into one 4k page. With
XDP enabled (increased headroom) or an increased MTU, PP effectively
starts doing 1-frame-per-page with literally no change in performance
(obviously with increased RAM usage -- I mean, it gets restored to the
baseline numbers).
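Back-of-the-envelope math behind that, as a rough sketch with
illustrative constants (the real iavf buffer sizing differs in details):

#include <linux/bpf.h>
#include <linux/skbuff.h>

#define RX_BUF_LEN	1536	/* covers a 1500-byte MTU frame */

static unsigned int frames_per_4k_page(bool xdp)
{
	unsigned int headroom = xdp ? XDP_PACKET_HEADROOM : 0;
	unsigned int truesize = SKB_DATA_ALIGN(headroom + RX_BUF_LEN) +
				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

	/* ~1.8K without XDP -> two frames fit in a 4k page;
	 * ~2.1K with 256 bytes of XDP headroom -> only one does.
	 */
	return 4096 / truesize;
}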
** BTW, instead of 2048 + 2048, I'm getting 1920 + 2176. Maybe the
stack would be happier to see a more consistent truesize for caching
purposes. I'll try to play with it.
Thanks,
Olek