Message-ID: <717f47b1-d9c1-47d9-83ae-153ee11bb66d@intel.com>
Date: Fri, 12 Apr 2024 17:03:16 -0500
From: "Samudrala, Sridhar" <sridhar.samudrala@...el.com>
To: Parav Pandit <parav@...dia.com>, David Ahern <dsahern@...nel.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"stephen@...workplumber.org" <stephen@...workplumber.org>
CC: Jiri Pirko <jiri@...dia.com>, Shay Drori <shayd@...dia.com>, "Michal
 Swiatkowski" <michal.swiatkowski@...ux.intel.com>
Subject: Re: [PATCH v2 0/2] devlink: Support setting max_io_eqs



On 4/12/2024 12:22 AM, Parav Pandit wrote:
> 
>> From: Parav Pandit <parav@...dia.com>
>> Sent: Friday, April 12, 2024 9:02 AM
>>
>> Hi David, Sridhar,
>>
>>> From: David Ahern <dsahern@...nel.org>
>>> Sent: Friday, April 12, 2024 7:36 AM
>>>
>>> On 4/11/24 5:03 PM, Samudrala, Sridhar wrote:
>>>>
>>>>
>>>> On 4/10/2024 9:32 PM, Parav Pandit wrote:
>>>>> Hi Sridhar,
>>>>>
>>>>>> From: Samudrala, Sridhar <sridhar.samudrala@...el.com>
>>>>>> Sent: Thursday, April 11, 2024 4:53 AM
>>>>>>
>>>>>>
>>>>>> On 4/10/2024 6:58 AM, Parav Pandit wrote:
>>>>>>> Devices send event notifications for the IO queues, such as tx
>>>>>>> and rx queues, through event queues.
>>>>>>>
>>>>>>> Enable a privileged owner, such as a hypervisor PF, to set the
>>>>>>> number of IO event queues for the VF and SF during the
>>>>>>> provisioning stage.
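[For reference, the knob in this series is exposed through the devlink
port function interface. A sketch of the intended usage, assuming the
iproute2 devlink syntax; the port index and value here are only
examples:

  $ devlink port function set pci/0000:06:00.0/2 max_io_eqs 10
  $ devlink port show pci/0000:06:00.0/2
]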
>>>>>>
>>>>>> How do you provision tx/rx queues for VFs & SFs?
>>>>>> Don't you need a similar mechanism to set up max tx/rx queues too?
>>>>>
>>>>> Currently we don't. They are derived from the IO event queues.
>>>>> As you know, sometimes more txqs than IO event queues are needed
>>>>> for XDP, timestamping, or multiple TCs.
>>>>> If needed, additional knobs for txq and rxq can probably be added
>>>>> to restrict device resources.
>>>>
>>>> Rather than deriving tx and rx queues from IO event queues, isn't
>>>> it more user-friendly to do it the other way around? Let the host
>>>> admin set the max number of tx and rx queues allowed, and have the
>>>> driver derive the number of IO event queues from those values. This
>>>> would be consistent with what ethtool reports as pre-set maximum
>>>> values for the corresponding VF/SF.
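[For comparison, this is the kind of per-queue limit ethtool already
reports; a sketch with made-up numbers:

  $ ethtool -l eth0
  Channel parameters for eth0:
  Pre-set maximums:
  RX:             8
  TX:             8
  Other:          0
  Combined:       16
]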
>>>>
>>>
>>> I agree with this point: IO EQ seems to be an mlx5 thing (or maybe
>>> I have not reviewed enough of the other drivers).
>>
>> IO EQs are used by hns3, mana, mlx5, mlxsw, be2net. They might not yet have
>> the need to provision them.
>>
>>> Rx and Tx queues are already part of the ethtool API. This devlink
>>> feature allows resource limits to be configured, and a consistent
>>> API across tools would be better for users.
>>
>> The IO EQs of a function are also used outside the netdev stack; on a
>> multi-functionality device they back, for example, RDMA completion
>> vectors.
>> Txq and rxq are a separate resource, so they are not mutually
>> exclusive with IO EQs.
>>
>> I can additionally add txq and rxq provisioning knobs if that is useful, yes?

Yes. We need knobs for txq and rxq too.
An IO EQ looks like a completion queue. We don't need them for the ice
driver at this time, but for our idpf-based control/switchdev driver we
need a way to set up the max number of tx queues, rx queues, rx buffer
queues, and tx completion queues.
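[As a rough sketch of what such host-admin knobs could look like; the
max_txqs/max_rxqs attribute names are hypothetical and not part of this
series:

  $ devlink port function set pci/0000:06:00.0/1 max_txqs 8 max_rxqs 8
]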

>>
>> Sridhar,
>> I haven't checked lately how usable this is for other drivers; will
>> you also implement the txq and rxq callbacks?
>> Please let me know, and I can start the work on those additional
>> knobs later next week.

Sure. Our subfunction support for ice is currently under review, and we
are defaulting to one rx/tx queue for now. These knobs will be required
and useful once we enable more than one queue per SF.
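[For context, the SF provisioning flow where such knobs would apply,
using the existing devlink SF port syntax; the pfnum/sfnum values and
resulting port index are only examples:

  $ devlink port add pci/0000:06:00.0 flavour pcisf pfnum 0 sfnum 88
  $ devlink port function set pci/0000:06:00.0/32768 max_io_eqs 4
]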

> 
> I also forgot to mention in the reply above that some drivers, like
> mlx5, create internal txqs and rxqs that are not directly visible in
> channels: for XDP, timestamping, traffic classes, dropping certain
> packets on rx, etc.
> So an exact derivation of IO event queues from txq/rxq counts is also
> hard there.
> Regardless, to me both knobs are useful, and the driver will create
> resources based on the min() of both limits.
> 
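[To make the min() semantics concrete: if the admin provisions
max_io_eqs of 10 but the device only supports 8 event queues for that
function, the driver would end up creating min(10, 8) = 8.]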
