Message-ID: <8732499b-df8c-0ee0-bf0e-815736cf4de2@intel.com>
Date: Fri, 4 Aug 2023 16:43:51 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Ratheesh Kannoth <rkannoth@...vell.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Sunil Kovvuri Goutham" <sgoutham@...vell.com>,
Geethasowjanya Akula <gakula@...vell.com>,
Subbaraya Sundeep Bhatta <sbhatta@...vell.com>,
Hariprasad Kelam <hkelam@...vell.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>
Subject: Re: [EXT] Re: [PATCH net] octeontx2-pf: Set maximum queue size to 16K
From: Ratheesh Kannoth <rkannoth@...vell.com>
Date: Fri, 4 Aug 2023 02:25:55 +0000
>> From: Alexander Lobakin <aleksander.lobakin@...el.com>
>> Sent: Thursday, August 3, 2023 8:37 PM
>> To: Ratheesh Kannoth <rkannoth@...vell.com>
>> Subject: [EXT] Re: [PATCH net] octeontx2-pf: Set maximum queue size to 16K
>
>
>>> This recycling will impact performance, right? Otherwise, why didn't
>>> page pool make this size a constant?
>>
>> Page Pool doesn't need huge ptr_ring sizes to successfully recycle pages.
>> Especially given that the recent PP optimizations made locking recycling
>> happen much more rarely.
> Got it. Thanks.
>
>> Re "size as constant" -- because lots of NICs don't need more than 256 or 512
>> descriptors and it would be only a waste to create page_pools with huge
>> ptr_rings for them. Queue sizes bigger than 1024 (ok, maybe
>> 2048) is the moment when the linear scale stops working. That's why I
>> believe that going out of [64, 2048] for page_pools doesn't make much
>> sense.
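To illustrate (a minimal sketch, not octeontx2 code -- the function name and
the values here are mine): pool_size only sizes the ptr_ring used on the
locked recycling path, so it doesn't have to match the HW ring length.

#include <net/page_pool.h>	/* PP API location as of this writing */

struct page_pool *create_pp_sketch(struct device *dev, int nid, u32 qsize)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,
		/* ptr_ring size, *not* the HW descriptor count */
		.pool_size	= min(qsize, 2048U),
		.nid		= nid,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);
}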
> So, should we clamp to 2048 in page_pool_init()? But it looks odd to me:
> the user requests > 2048, but will never be aware that it was clamped to 2048.
Why should he be aware of that? :D
But seriously, I can't just say: "hey, I promise your driver will work best
when the PP size is clamped to 2048, just blindly follow this" -- it's more
of a preference right now. Because...
> Better do this clamping in Driver and print a warning message ?
...because you just need to test your driver with different PP sizes and
decide for yourself which upper cap to set. If it works the same when queues
are 16k and PPs are 2k versus 16k + 16k -- fine, you can stop there.
If 16k + 16k or 16k + 8k or whatever works better -- stop there. No hard
requirements.
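E.g. a quick sketch of what the driver-side clamping + warning could look
like (the helper name and the 2048 cap are made up -- pick whatever cap your
testing shows is enough):

static u32 otx2_pp_size_sketch(struct net_device *netdev, u32 qsize)
{
	/* Cap found sufficient by testing; purely illustrative */
	const u32 pp_cap = 2048;

	if (qsize > pp_cap)
		netdev_warn(netdev,
			    "page_pool size clamped: %u -> %u\n",
			    qsize, pp_cap);

	return min(qsize, pp_cap);
}

...and then feed the result into pp_params.pool_size instead of the raw
queue size.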
Just don't cap the maximum queue length because of the PP sanity check; that
doesn't make sense.
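(The sanity check I'm referring to lives in page_pool_init() in
net/core/page_pool.c; if I recall the current code correctly, it's just:

	/* Sanity limit mem that can be pinned down */
	if (ring_qsize > 32768)
		return -E2BIG;

i.e. it's there to bound pinned memory, not to dictate how long your HW
rings are allowed to be.)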
>
> -Ratheesh
Thanks,
Olek