Message-ID: <MWHPR1801MB19182D7ADBCF542FCC1C4DDDD30CA@MWHPR1801MB1918.namprd18.prod.outlook.com>
Date: Mon, 7 Aug 2023 02:51:24 +0000
From: Ratheesh Kannoth <rkannoth@...vell.com>
To: Jakub Kicinski <kuba@...nel.org>,
	Alexander Lobakin <aleksander.lobakin@...el.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Sunil Kovvuri Goutham <sgoutham@...vell.com>,
	Geethasowjanya Akula <gakula@...vell.com>,
	Subbaraya Sundeep Bhatta <sbhatta@...vell.com>,
	Hariprasad Kelam <hkelam@...vell.com>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"edumazet@...gle.com" <edumazet@...gle.com>,
	"pabeni@...hat.com" <pabeni@...hat.com>
Subject: RE: [EXT] Re: [PATCH net] octeontx2-pf: Set maximum queue size to 16K
> From: Jakub Kicinski <kuba@...nel.org>
> Sent: Saturday, August 5, 2023 2:05 AM
> To: Alexander Lobakin <aleksander.lobakin@...el.com>
> Subject: Re: [EXT] Re: [PATCH net] octeontx2-pf: Set maximum queue size to
> 16K
>
> IDK if I agree with you here :S Tuning this in the driver relies on the
> assumption that the HW / driver is the thing that matters.
> I'd think that the workload, platform (CPU) and config (e.g. is IOMMU
> enabled?) will matter at least as much. While driver developers will end up
> tuning to whatever servers they have, random single config and most likely..
> iperf.
>
> IMO it's much better to re-purpose "pool_size" and treat it as the ring size,
> because that's what most drivers end up putting there.
> Defer tuning of the effective ring size to the core and user input (via the "it
> will be added any minute now" netlink API for configuring page pools)...
>
> So capping the recycle ring to 32k instead of returning the error seems like an
> okay solution for now.
Either solution looks okay to me. Let me push a patch with Jakub's proposal for now.
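
Roughly what that would look like in the driver's pool init path, i.e. clamp the requested
pool size instead of returning an error once it exceeds what the page pool core accepts
(32K ring entries). This is only a sketch; the helper name, the flag set and the
PP_RING_SIZE_MAX define below are placeholders, not the actual patch:

#include <linux/dma-mapping.h>	/* DMA_FROM_DEVICE */
#include <linux/numa.h>		/* NUMA_NO_NODE */
#include <net/page_pool.h>	/* struct page_pool_params, page_pool_create() */

/* page_pool_init() currently rejects ptr_ring sizes above 32768 entries. */
#define PP_RING_SIZE_MAX	32768

static struct page_pool *otx2_pp_create(struct device *dev, int numptrs)
{
	struct page_pool_params pp_params = { 0 };

	pp_params.flags     = PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG;
	/* Cap the pool size instead of failing for large queues. */
	pp_params.pool_size = min(numptrs, PP_RING_SIZE_MAX);
	pp_params.nid       = NUMA_NO_NODE;
	pp_params.dev       = dev;
	pp_params.dma_dir   = DMA_FROM_DEVICE;

	return page_pool_create(&pp_params);
}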
-Ratheesh.