Open Source and information security mailing list archives
 
Date:   Fri, 5 Jul 2019 16:33:07 +0000
From:   "Saleem, Shiraz" <shiraz.saleem@...el.com>
To:     Greg KH <gregkh@...uxfoundation.org>,
        Jason Gunthorpe <jgg@...lanox.com>
CC:     "Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "dledford@...hat.com" <dledford@...hat.com>,
        "Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "nhorman@...hat.com" <nhorman@...hat.com>,
        "sassmann@...hat.com" <sassmann@...hat.com>,
        "poswald@...e.com" <poswald@...e.com>,
        "Ismail, Mustafa" <mustafa.ismail@...el.com>,
        "Ertman, David M" <david.m.ertman@...el.com>,
        "Bowers, AndrewX" <andrewx.bowers@...el.com>
Subject: RE: [net-next 1/3] ice: Initialize and register platform device to
 provide RDMA

> Subject: Re: [net-next 1/3] ice: Initialize and register platform device to provide
> RDMA
> 
> On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> > On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > > From: Tony Nguyen <anthony.l.nguyen@...el.com>
> > > > > > >
> > > > > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > > > > Thus the ice driver needs to provide access to the RDMA
> > > > > > > hardware block via a virtual bus; utilize the platform bus to provide this
> access.
> > > > > > >
> > > > > > > This patch initializes the driver to support RDMA as well as
> > > > > > > creates and registers a platform device for the RDMA driver
> > > > > > > to register to. At this point the driver is fully
> > > > > > > initialized to register a platform driver, however, can not
> > > > > > > yet register as the ops have not been implemented.
> > > > > >
> > > > > > I think you need Greg's ack on all this driver stuff -
> > > > > > particularly that a platform_device is OK.
> > > > >
> > > > > A platform_device is almost NEVER ok.
> > > > >
> > > > > Don't abuse it, make a real device on a real bus.  If you don't
> > > > > have a real bus and just need to create a device to hang other
> > > > > things off of, then use the virtual one, that's what it is there for.
> > > >
> > > > Ideally I'd like to see all the RDMA drivers that connect to
> > > > ethernet drivers use some similar scheme.
> > >
> > > Why?  They should be attached to a "real" device, why make any up?
> >
> > ? A "real" device, like struct pci_device, can only bind to one
> > driver. How can we bind it concurrently to net, rdma, scsi, etc?
> 
> MFD was designed for this very problem.
> 
> > > > This is for a PCI device that plugs into multiple subsystems in
> > > > the kernel, ie it has net driver functionality, rdma
> > > > functionality, some even have SCSI functionality
> > >
> > > Sounds like a MFD device, why aren't you using that functionality
> > > instead?
> >
> > This was also my advice, but in another email Jeff says:
> >
> >   MFD architecture was also considered, and we selected the simpler
> >   platform model. Supporting a MFD architecture would require an
> >   additional MFD core driver, individual platform netdev, RDMA function
> >   drivers, and stripping a large portion of the netdev drivers into
> >   MFD core. The sub-devices registered by MFD core for function
> >   drivers are indeed platform devices.
> 
> So, "mfd is too hard, let's abuse a platform device" is ok?
> 
> People have been wanting to do MFD drivers for PCI devices for a long time, it's
> about time someone actually did the work for it, I bet it will not be all that complex
> if tiny embedded drivers can do it :)
> 
Hi Greg - Thanks for your feedback!

We currently have two PCI function netdev drivers in the kernel (i40e & ice) that support devices (x722 & e810)
which are RDMA capable. Our objective is to add a single unified RDMA driver
(as this is a subsystem-specific requirement) which needs to access HW resources from the
netdev PF drivers. Attaching platform devices from the netdev drivers to the platform bus,
and having a single RDMA platform driver bind to them and access these resources, seemed
like a simple approach to realize our objective. But it seems attaching platform devices this way is
wrong, and I would like to understand why.
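For concreteness, the approach described above would look roughly like the sketch below from the netdev PF driver. This is illustrative only; the device name "ice_rdma" and the pdata struct are hypothetical, not the actual patch code:

```c
/* Hypothetical sketch: the netdev PF driver registers a platform
 * device, parented to the PCI device, that a single RDMA platform
 * driver can bind to by name. */
#include <linux/platform_device.h>

struct iidc_peer_data {                 /* hypothetical shared-resource handle */
	void __iomem *hw_addr;
};

static struct platform_device *ice_add_rdma_pdev(struct device *parent,
						 struct iidc_peer_data *pdata)
{
	struct platform_device_info pinfo = {
		.parent = parent,       /* the PCI device */
		.name = "ice_rdma",     /* matched by the RDMA platform driver */
		.id = PLATFORM_DEVID_AUTO,
		.data = pdata,
		.size_data = sizeof(*pdata),
	};

	return platform_device_register_full(&pinfo);
}
```

The RDMA driver would then reach the shared HW through the platform device's platform_data and dev->parent.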

Are platform sub-devices only to be added from an MFD core driver? I am also wondering whether the MFD
architecture would allow for a single RDMA driver, and whether we need an MFD core driver for
each device (x722 & e810) or whether a single driver can cover both.

Shiraz
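
For comparison, the MFD route Greg suggests would look roughly like the sketch below from the PCI driver's probe path. The cell name "ice-rdma" is hypothetical; mfd_add_devices() registers the cells as platform devices parented to the PCI device:

```c
/* Hypothetical sketch: exposing the RDMA function as an MFD cell
 * from a PCI netdev driver.  The cell name is illustrative. */
#include <linux/mfd/core.h>
#include <linux/pci.h>

static const struct mfd_cell ice_mfd_cells[] = {
	{
		.name = "ice-rdma",     /* matched by the RDMA platform driver */
	},
};

static int ice_register_rdma(struct pci_dev *pdev)
{
	/* Cells become platform devices with &pdev->dev as parent,
	 * so the RDMA driver can reach shared HW via dev->parent. */
	return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
			       ice_mfd_cells, ARRAY_SIZE(ice_mfd_cells),
			       NULL, 0, NULL);
}
```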
