Date:   Mon, 12 Apr 2021 14:50:43 +0000
From:   "Saleem, Shiraz" <shiraz.saleem@...el.com>
To:     Jason Gunthorpe <jgg@...dia.com>
CC:     "dledford@...hat.com" <dledford@...hat.com>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "Ertman, David M" <david.m.ertman@...el.com>,
        "Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
        "Williams, Dan J" <dan.j.williams@...el.com>,
        "Hefty, Sean" <sean.hefty@...el.com>,
        "Lacombe, John S" <john.s.lacombe@...el.com>
Subject: RE: [PATCH v4 01/23] iidc: Introduce iidc.h

> Subject: Re: [PATCH v4 01/23] iidc: Introduce iidc.h
> 
> On Wed, Apr 07, 2021 at 08:58:49PM +0000, Saleem, Shiraz wrote:
> > > Subject: Re: [PATCH v4 01/23] iidc: Introduce iidc.h
> > >
> > > On Tue, Apr 06, 2021 at 04:01:03PM -0500, Shiraz Saleem wrote:
> > >
> > > > +/* Following APIs are implemented by core PCI driver */
> > > > +struct iidc_core_ops {
> > > > +	/* APIs to allocate resources such as VEB, VSI, Doorbell queues,
> > > > +	 * completion queues, Tx/Rx queues, etc...
> > > > +	 */
> > > > +	int (*alloc_res)(struct iidc_core_dev_info *cdev_info,
> > > > +			 struct iidc_res *res,
> > > > +			 int partial_acceptable);
> > > > +	int (*free_res)(struct iidc_core_dev_info *cdev_info,
> > > > +			struct iidc_res *res);
> > > > +
> > > > +	int (*request_reset)(struct iidc_core_dev_info *cdev_info,
> > > > +			     enum iidc_reset_type reset_type);
> > > > +
> > > > +	int (*update_vport_filter)(struct iidc_core_dev_info *cdev_info,
> > > > +				   u16 vport_id, bool enable);
> > > > +	int (*vc_send)(struct iidc_core_dev_info *cdev_info, u32 vf_id, u8 *msg,
> > > > +		       u16 len);
> > > > +};
> > >
> > > What is this? There is only one implementation:
> > >
> > > static const struct iidc_core_ops ops = {
> > > 	.alloc_res			= ice_cdev_info_alloc_res,
> > > 	.free_res			= ice_cdev_info_free_res,
> > > 	.request_reset			= ice_cdev_info_request_reset,
> > > 	.update_vport_filter		= ice_cdev_info_update_vsi_filter,
> > > 	.vc_send			= ice_cdev_info_vc_send,
> > > };
> > >
> > > So export and call the functions directly.
> >
> > No. Then we end up requiring ice to be loaded even when we just want
> > to use irdma with x722 [whose ethernet driver is "i40e"].
> 
> So what? What does it matter to load a few extra kb of modules?

Because it is unnecessary to force a user to build/load drivers for HW they
don't have. The problem gets compounded if we have to do it for all future
Intel HW PCI drivers, i.e. depends on ICE && ....

IIDC is Intel's converged, generic RDMA <--> PCI driver channel interface,
which we intend to use moving forward. These .ops callbacks are part of that
interface and will have a different implementation in each HW generation's
PCI core driver. The table is extensible: new ops can be added for new HW,
and ops that a given HW does not implement are simply left NULL.
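
As a minimal sketch (not actual irdma code; it assumes cdev_info exposes the
table as an ->ops pointer, per this series), a consumer checks an op for NULL
before calling it, so a HW generation that leaves it unimplemented degrades
gracefully:

/* sketch: consumer side, guarding an op a given HW generation may omit */
static int example_alloc_queues(struct iidc_core_dev_info *cdev_info,
				struct iidc_res *res)
{
	if (!cdev_info->ops || !cdev_info->ops->alloc_res)
		return -EOPNOTSUPP;	/* op left NULL by this PCI core driver */

	/* 0: partial allocation is not acceptable for this caller */
	return cdev_info->ops->alloc_res(cdev_info, res, 0);
}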

There is a near-term Intel ethernet VF driver which will use IIDC to provide
RDMA in the VF and implement some of these .ops callbacks. There is also
intent to move i40e to IIDC.

And yes, it allows a unified irdma driver to be loaded without all the
multi-gen PCI core drivers having to be built/loaded as a prerequisite,
which solves a pain point for users and avoids an unnecessary memory
footprint.

In the past, with i40e <-> i40iw, I acknowledge such a dependency was
decoupled for the wrong reasons [1], and I understand where your frustration
is coming from. But in a unified irdma driver model connecting to multiple
PCI gen drivers, I do think it serves a purpose. This has also been voiced
over the years in some of our discussions [2] leading to the auxiliary bus,
and it has been part of our submissions from the get-go. In fact, use of such
domain-specific .ops from the parent device is an assumption baked into the
design when the auxiliary bus was conceived, and it is in the documentation
[3] (see Example Usage).
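
For reference, the shape described in [3] (condensed from its Example Usage
section; the "foo" names are the documentation's illustrative ones, not ours)
is that the parent driver publishes its callbacks in the structure that wraps
the auxiliary_device, and the auxiliary driver recovers them with
container_of():

/* parent PCI driver: wrap the auxiliary_device together with domain ops */
struct foo {
	struct auxiliary_device auxdev;
	void (*connect)(struct auxiliary_device *auxdev);
	void (*disconnect)(struct auxiliary_device *auxdev);
};

/* auxiliary (e.g. RDMA) driver: recover the parent's ops in probe */
static int foo_probe(struct auxiliary_device *auxdev,
		     const struct auxiliary_device_id *id)
{
	struct foo *parent = container_of(auxdev, struct foo, auxdev);

	parent->connect(auxdev);
	return 0;
}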

[1] https://lore.kernel.org/linux-rdma/20180522205612.GD7502@mellanox.com/
[2] https://lore.kernel.org/linux-rdma/2B0E3F215D1AB84DA946C8BEE234CCC97B2E1D29@ORSMSX101.amr.corp.intel.com/
[3] https://www.kernel.org/doc/html/latest/driver-api/auxiliary_bus.html

Shiraz
