Message-ID:
 <PH0PR12MB5481F01E3B241D87AD20E947DC032@PH0PR12MB5481.namprd12.prod.outlook.com>
Date: Fri, 5 Apr 2024 16:34:59 +0000
From: Parav Pandit <parav@...dia.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "davem@...emloft.net"
	<davem@...emloft.net>, "edumazet@...gle.com" <edumazet@...gle.com>,
	"pabeni@...hat.com" <pabeni@...hat.com>, "corbet@....net" <corbet@....net>,
	"kalesh-anakkur.purayil@...adcom.com" <kalesh-anakkur.purayil@...adcom.com>,
	Saeed Mahameed <saeedm@...dia.com>, "leon@...nel.org" <leon@...nel.org>,
	"jiri@...nulli.us" <jiri@...nulli.us>, Shay Drori <shayd@...dia.com>, Dan
 Jurgens <danielj@...dia.com>, Dima Chumak <dchumak@...dia.com>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
	"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>, Jiri Pirko
	<jiri@...dia.com>
Subject: RE: [net-next v3 1/2] devlink: Support setting max_io_eqs



> From: Jakub Kicinski <kuba@...nel.org>
> Sent: Friday, April 5, 2024 7:44 PM
> 
> On Fri, 5 Apr 2024 03:13:36 +0000 Parav Pandit wrote:
> > Netdev qps (txq, rxq pair) channels created by the driver are typically up to
> > the number of CPUs, provided it has enough IO event queues up to the CPU count.
> >
> > RDMA QPs are far more numerous than netdev queues, as multiple processes use
> > them and they are a per-user-space-process resource.
> > Those applications choose the number of QPs based on the number of CPUs and
> > the number of event channels used to deliver notifications to user space.
> 
> Some other drivers (e.g. intel) support multiple queues per core in netdev.
> For mlx5 I think AF_XDP may be a good example (or used to be) where there
> may be more than one queue?
>
Yes, there may be multiple netdev queues connected to a single EQ.
For example, besides the mlx5 XDP case you described, mlx5 also creates multiple txqs per channel (one per traffic class), and those are linked to the channel's single EQ.
But those txqs are still per channel, AFAIK.
 
> So I think the question still stands even for netdev.
> We should document whether the number of EQs constrains the number of Rx/Tx
> queues.
> 
I believe the number of txqs/rxqs can be more than the number of EQs, with multiple queues connecting to the same EQ.
Netdev channels have a more accurate linkage to EQs than raw txq/rxq counts do.
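
For illustration, a rough way to see this on a hypothetical mlx5 netdev (the device name and the presence of multiple traffic classes are assumptions here) is to compare the channel count with the raw txq count:

  # Channel (combined) count roughly tracks the IO EQs available to the function.
  $ ethtool -l eth0

  # The raw txq count is larger when multiple traffic classes are enabled,
  # since mlx5 creates one txq per TC per channel, all on that channel's EQ.
  $ ls -d /sys/class/net/eth0/queues/tx-* | wc -l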

> > The driver uses the IRQs dynamically, up to the PCI limit, based on the
> > supported IO event queues.
> 
> Right but one IRQ <> one EQ? Typically / always?
Typically yes, one IRQ <> one EQ.
> SFs "share" the IRQs with PF IIRC, do they share EQs?
>
SFs do not share EQs; each SF has its own dedicated EQs.
You remember correctly that they do share IRQs.
 
> > > The next patch says "maximum IO event queues which are typically
> > > used to derive the maximum and default number of net device channels"
> > > It may not be obvious to non-mlx5 experts, I think it needs to be
> > > better documented.
> > I will expand the documentation in .../networking/devlink/devlink-port.rst.
> >
> > I will add the below change to v4, which also has David's comments
> > addressed.
> > Is this OK for you?
> 
> Looks like a good start but I think a few more sentences describing the
> relation to other resources would be good.
>
I think EQs are a limited object that does not have a wider relation to other resources in the stack.
The relation to IRQs is probably a good addition.
Along with the below changes, I will add a reference to IRQs in v4.
 
> > --- a/Documentation/networking/devlink/devlink-port.rst
> > +++ b/Documentation/networking/devlink/devlink-port.rst
> > @@ -304,6 +304,11 @@ When user sets maximum number of IO event queues
> >  for a SF or a VF, such function driver is limited to consume only enforced
> >  number of IO event queues.
> >
> > +IO event queues deliver events related to IO queues, including
> > +network device transmit and receive queues (txq and rxq) and RDMA Queue Pairs (QPs).
> > +For example, the number of netdevice channels and RDMA device
> > +completion vectors are derived from the function's IO event queues.
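
As a usage sketch to go with that documentation (the PCI address, port index, and the value 10 are made up here, and the command assumes the iproute2 devlink syntax this series proposes):

  # Inspect the current port function attributes of a VF/SF port.
  $ devlink port show pci/0000:06:00.0/2

  # Cap the function to 10 IO event queues; the function driver then derives
  # its netdev channel count and RDMA completion vectors from this limit.
  $ devlink port function set pci/0000:06:00.0/2 max_io_eqs 10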
