Message-ID: <48675853-2971-42a1-9596-73d1c4517085@intel.com>
Date: Mon, 19 Feb 2024 14:33:54 +0100
From: Przemek Kitszel <przemyslaw.kitszel@...el.com>
To: Jiri Pirko <jiri@...nulli.us>, Mateusz Polchlopek
	<mateusz.polchlopek@...el.com>
CC: <intel-wired-lan@...ts.osuosl.org>, <netdev@...r.kernel.org>,
	<horms@...nel.org>, Lukasz Czapnik <lukasz.czapnik@...el.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v4 4/5] ice: Add
 tx_scheduling_layers devlink param

On 2/19/24 13:37, Jiri Pirko wrote:
> Mon, Feb 19, 2024 at 11:05:57AM CET, mateusz.polchlopek@...el.com wrote:
>> From: Lukasz Czapnik <lukasz.czapnik@...el.com>
>>
>> It was observed that Tx performance was inconsistent across all queues
>> and/or VSIs and that it was directly connected to the existing 9-layer
>> topology of the Tx scheduler.
>>
>> Introduce a new private devlink param - tx_scheduling_layers. This
>> parameter gives the user the flexibility to choose the 5-layer transmit
>> scheduler topology, which helps to smooth out transmit performance.
>>
>> Allowed parameter values are 5 and 9.
>>
>> Example usage:
>>
>> Show:
>> devlink dev param show pci/0000:4b:00.0 name tx_scheduling_layers
>> pci/0000:4b:00.0:
>>   name tx_scheduling_layers type driver-specific
>>     values:
>>       cmode permanent value 9
>>
>> Set:
>> devlink dev param set pci/0000:4b:00.0 name tx_scheduling_layers value 5
>> cmode permanent
> 
> This is a kind of proprietary param, similar to a number that were shot

Not sure if this is the same kind of param, but it's for sure a
proprietary one.

> down for mlx5 in the past. Jakub?

I'm not that familiar with the history/ies around mlx5, but this case is
somewhat different, at least to me: we have a performance fix for the
tree inside the FW/HW, while you (IIRC) were about to introduce some
nice and general abstraction layer, which could have been used by other
HW vendors too, but instead it ended up mlx-only.

> 
> Also, given this is apparently nvconfig configuration, it would
> probably be more suitable to use some provisioning tool.

TBH, we will want to add some other NVM-related params, but that does
not justify yet another tool to configure the PF. (And then there would
be a big debate about whether FW update should be moved there too, for
consistency.)
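To illustrate why devlink is enough here - a minimal sketch (not the
actual patch; the enum value and the callback names below are made up)
of how a driver-specific, permanent-cmode param like
tx_scheduling_layers gets wired into the standard devlink param API:

enum ice_devlink_param_id {
	ICE_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
	ICE_DEVLINK_PARAM_ID_TX_SCHED_LAYERS,
};

/* bodies omitted in this sketch - they would read/write the value
 * in NVM via the FW admin queue */
static int ice_tx_sched_layers_get(struct devlink *devlink, u32 id,
				   struct devlink_param_gset_ctx *ctx);
static int ice_tx_sched_layers_set(struct devlink *devlink, u32 id,
				   struct devlink_param_gset_ctx *ctx);

static int
ice_tx_sched_layers_validate(struct devlink *devlink, u32 id,
			     union devlink_param_value val,
			     struct netlink_ext_ack *extack)
{
	/* only the 5- and 9-layer topologies are allowed */
	if (val.vu8 != 5 && val.vu8 != 9) {
		NL_SET_ERR_MSG_MOD(extack,
				   "Wrong number of tx scheduler layers");
		return -EINVAL;
	}
	return 0;
}

static const struct devlink_param ice_devlink_params[] = {
	/* permanent cmode: the value lives in the device NVM, so no
	 * extra provisioning tool is needed to get at it */
	DEVLINK_PARAM_DRIVER(ICE_DEVLINK_PARAM_ID_TX_SCHED_LAYERS,
			     "tx_scheduling_layers", DEVLINK_PARAM_TYPE_U8,
			     BIT(DEVLINK_PARAM_CMODE_PERMANENT),
			     ice_tx_sched_layers_get,
			     ice_tx_sched_layers_set,
			     ice_tx_sched_layers_validate),
};

Registration is then just devl_params_register(devlink,
ice_devlink_params, ARRAY_SIZE(ice_devlink_params)), same as for any
other param, and userspace keeps using the devlink CLI shown above.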

> This is related to the mlx5 misc driver.
> 
> Until we figure out the plan, this has my nack:
> 
> Nacked-by: Jiri Pirko <jiri@...dia.com>

IMO this is an easy case, but I would like to hear from the netdev
maintainers.


