Date: Tue, 8 Aug 2023 02:26:35 +0000
From: Ratheesh Kannoth <rkannoth@...vell.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>,
Jakub Kicinski <kuba@...nel.org>,
Alexander H Duyck <alexander.duyck@...il.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"pabeni@...hat.com" <pabeni@...hat.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Alexander Lobakin <aleksander.lobakin@...el.com>,
Yunsheng Lin <linyunsheng@...wei.com>
Subject: RE: [EXT] Re: [PATCH net-next] page_pool: Clamp ring size to 32K
> From: Jesper Dangaard Brouer <hawk@...nel.org>
> Sent: Tuesday, August 8, 2023 1:42 AM
> As a temporary solution, I'm actually fine with capping at 32k.
> Driver developer loose some feedback control, but perhaps that is okay, if
> we can agree that the net-core should control tuning this anyhow.
Capping will never let the user know that memory is being unnecessarily wasted, as there is not much
correlation between ring size and page pool size. I would prefer not setting pool->p.pool_size in the
octeontx2 driver (pool->p.pool_size = 0) and letting the page pool infra decide on that. Is this change acceptable?
Burst traffic on a 100G link can be handled with a page pool size of 1024; do we have any data from any other driver?
-Ratheesh