Message-ID: <e5a3edb9-1f6b-d7af-3f3a-4c80ee567c6b@intel.com>
Date: Thu, 25 May 2023 09:49:53 +0200
From: "Wilczynski, Michal" <michal.wilczynski@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Jiri Pirko <jiri@...nulli.us>, Tony Nguyen <anthony.l.nguyen@...el.com>,
	<davem@...emloft.net>, <pabeni@...hat.com>, <edumazet@...gle.com>,
	<netdev@...r.kernel.org>, <lukasz.czapnik@...el.com>,
	<przemyslaw.kitszel@...el.com>
Subject: Re: [PATCH net-next 0/5][pull request] ice: Support 5 layer Tx
 scheduler topology



On 5/24/2023 10:02 PM, Jakub Kicinski wrote:
> On Wed, 24 May 2023 18:59:20 +0200 Wilczynski, Michal wrote:
>>  [...]  
>>>> I wouldn't say it's a FW bug. Both approaches - 9-layer and 5-layer -
>>>> have their own pros and cons, and in some cases 5-layer is
>>>> preferable, especially if the user wants better performance. But at
>>>> the same time the user gives up tree layers that are actually useful
>>>> in some cases (especially when using DCB, but also with the recently
>>>> added devlink-rate implementation).
>>> I didn't notice mentions of DCB and devlink-rate in the series.
>>> The whole thing is really poorly explained.  
>> Sorry about that, I gave examples off the top of my head, since those are the
>> features that could potentially modify the scheduler tree; it seemed obvious to me
>> at the time. Lowering the number of layers in the scheduling tree increases
>> performance, but only allows you to create a much simpler scheduling tree. I agree
>> that mentioning the features that actually modify the scheduling tree could be
>> helpful to the reviewer.
> The reviewer is one thing, but so is the user. The documentation needs to
> be clear enough for the user to be able to confidently make a choice one
> way or the other. I'm not sure 5- vs 9-layer is meaningful to the user
> at all.

It is relevant especially when the number of VFs/queues is not a multiple of 8, as
described in the first commit of this series - that's the real-world user problem.
Performance was not consistent across queues if you had, for example, 9 queues.
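
To make that concrete - a rough illustration of the grouping effect, not
driver code, and the fan-out of 8 per parent node is my shorthand for the
"multiple of 8" constraint above:

	# queues attach to scheduler parent nodes in groups of up to 8
	#   8 queues  -> groups of [8]      (uniform arbitration)
	#   9 queues  -> groups of [8, 1]   (the 9th queue sits under a
	#                                     different parent than its peers)
	#   16 queues -> groups of [8, 8]   (uniform again)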

But in the answers above I was also trying to provide some background on why we
don't want to make the 5-layer topology the default.


>  In fact, the entire configuration would be better defined as
> a choice of features the user wants to be available, with the FW or driver
> making the decision on how to implement that most efficiently.

The user can change the number of queues/VFs 'on the fly', but a topology change
basically requires a reboot, since the contents of the NVM are changed.
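
For illustration, the switch would look something like this (the PCI address
is made up, and I'm assuming the devlink parameter name from this series):

	$ devlink dev param set pci/0000:16:00.0 \
		name tx_scheduling_layers value 5 cmode permanent
	$ reboot   # the new topology is read from NVM on the next boot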

So to accomplish that, we would need to perform a topology change after every
change to the number of queues, and it's not feasible to reboot every time the
user changes the number of queues.

Additionally, the 5-layer topology doesn't disable any of the features mentioned
(i.e. DCB/devlink-rate); it just makes them work a bit differently, but they
should still work.
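
For example, a devlink-rate configuration along these lines should still apply
on the 5-layer tree (the PCI address and node names here are made up):

	$ devlink port function rate add pci/0000:16:00.0/node_custom parent node_0
	$ devlink port function rate set pci/0000:16:00.0/node_custom tx_max 5Gbps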

To summarize: I would say that this series addresses a specific performance
problem users might have if their queue count is not a multiple of 8. I can't see
how this could be solved by a choice of features, as the decision regarding the
number of queues can be made on the fly.

Regards,
Michał





