Message-ID:
 <PH0PR12MB5481D604B1BFE72B76D62D37DC032@PH0PR12MB5481.namprd12.prod.outlook.com>
Date: Fri, 5 Apr 2024 03:13:36 +0000
From: Parav Pandit <parav@...dia.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "davem@...emloft.net"
	<davem@...emloft.net>, "edumazet@...gle.com" <edumazet@...gle.com>,
	"pabeni@...hat.com" <pabeni@...hat.com>, "corbet@....net" <corbet@....net>,
	"kalesh-anakkur.purayil@...adcom.com" <kalesh-anakkur.purayil@...adcom.com>,
	Saeed Mahameed <saeedm@...dia.com>, "leon@...nel.org" <leon@...nel.org>,
	"jiri@...nulli.us" <jiri@...nulli.us>, Shay Drori <shayd@...dia.com>, Dan
 Jurgens <danielj@...dia.com>, Dima Chumak <dchumak@...dia.com>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
	"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>, Jiri Pirko
	<jiri@...dia.com>
Subject: RE: [net-next v3 1/2] devlink: Support setting max_io_eqs

Hi Jakub,

> From: Jakub Kicinski <kuba@...nel.org>
> Sent: Friday, April 5, 2024 7:51 AM
> 
> On Wed, 3 Apr 2024 20:41:32 +0300 Parav Pandit wrote:
> > Many devices send event notifications for the IO queues, such as tx
> > and rx queues, through event queues.
> >
> > Enable a privileged owner, such as a hypervisor PF, to set the number
> > of IO event queues for the VF and SF during the provisioning stage.
> 
> What's the relationship between EQ and queue pairs and IRQs?

Netdev QPs (txq/rxq pairs, i.e. channels) are typically created by the driver up to the number of CPUs, provided the device has enough IO event queues to cover that CPU count.

RDMA QPs are far more numerous than netdev QPs, because multiple processes use them and they are a per-user-space-process resource.
Those applications size their QP count based on the number of CPUs and on the number of event channels used to deliver notifications to user space.

The driver uses IRQs dynamically, up to the PCI limit, based on the number of supported IO event queues.
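
To make this concrete (the port index and numbers below are only illustrative, assuming the iproute2 devlink counterpart exposes the attribute as max_io_eqs):

    # hypervisor PF caps the VF at provisioning time
    $ devlink port function set pci/0000:06:00.0/2 max_io_eqs 8

With such a cap, a VF running on, say, a 16-CPU host would create at most 8 netdev channels, and its RDMA device would expose at most 8 completion vectors (less anything the device reserves internally).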

> The next patch says "maximum IO event queues which are typically used to
> derive the maximum and default number of net device channels"
> It may not be obvious to non-mlx5 experts, I think it needs to be better
> documented.
I will expand the documentation in .../networking/devlink/devlink-port.rst.

I will add the below change in v4, which also addresses David's comments.
Is this ok for you?

--- a/Documentation/networking/devlink/devlink-port.rst
+++ b/Documentation/networking/devlink/devlink-port.rst
@@ -304,6 +304,11 @@ When user sets maximum number of IO event queues for a SF or
 a VF, such function driver is limited to consume only enforced
 number of IO event queues.

+IO event queues deliver events related to IO queues, including network
+device transmit and receive queues (txq and rxq) and RDMA Queue Pairs (QPs).
+For example, the numbers of netdevice channels and RDMA device completion
+vectors are derived from the function's IO event queues.
+
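
On the VF side the effect would be visible with standard tooling, for example (netdev name and trimmed output are only illustrative):

    $ ethtool -l enp6s0v1
    Channel parameters for enp6s0v1:
    Pre-set maximums:
    Combined:       8
    Current hardware settings:
    Combined:       8

i.e. the channel maximum follows the provisioned number of IO event queues rather than the CPU count.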
