Date:   Sat, 6 Jul 2019 16:03:38 +0000
From:   "Saleem, Shiraz" <shiraz.saleem@...el.com>
To:     Greg KH <gregkh@...uxfoundation.org>
CC:     Jason Gunthorpe <jgg@...lanox.com>,
        "Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "dledford@...hat.com" <dledford@...hat.com>,
        "Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "nhorman@...hat.com" <nhorman@...hat.com>,
        "sassmann@...hat.com" <sassmann@...hat.com>,
        "poswald@...e.com" <poswald@...e.com>,
        "Ismail, Mustafa" <mustafa.ismail@...el.com>,
        "Ertman, David M" <david.m.ertman@...el.com>,
        "Bowers, AndrewX" <andrewx.bowers@...el.com>
Subject: RE: [net-next 1/3] ice: Initialize and register platform device to
 provide RDMA

> Subject: Re: [net-next 1/3] ice: Initialize and register platform device to provide
> RDMA
> 
> On Fri, Jul 05, 2019 at 04:33:07PM +0000, Saleem, Shiraz wrote:
> > > Subject: Re: [net-next 1/3] ice: Initialize and register platform
> > > device to provide RDMA
> > >
> > > On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> > > > On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > > > > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > > > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > > > > From: Tony Nguyen <anthony.l.nguyen@...el.com>
> > > > > > > > >
> > > > > > > > > The RDMA block does not advertise on the PCI bus or any
> > > > > > > > > other bus. Thus the ice driver needs to provide access
> > > > > > > > > to the RDMA hardware block via a virtual bus; utilize
> > > > > > > > > the platform bus to provide this access.
> > > > > > > > >
> > > > > > > > > This patch initializes the driver to support RDMA as
> > > > > > > > > well as creates and registers a platform device for the
> > > > > > > > > RDMA driver to register to. At this point the driver is
> > > > > > > > > fully initialized to register a platform driver;
> > > > > > > > > however, it cannot yet register because the ops have not
> > > > > > > > > been implemented.
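
For concreteness, a rough sketch of what that amounts to on the ice side
(all names and the platform-data layout below are hypothetical, not the
actual patch code):

#include <linux/pci.h>
#include <linux/platform_device.h>

/* hypothetical hand-off data the netdev PF driver passes to the RDMA driver */
struct ice_rdma_info {
        struct pci_dev *pdev;           /* owning PCI function */
        void __iomem *rdma_regs;        /* RDMA register space carved out by ice */
};

static struct platform_device *
ice_register_rdma_pdev(struct pci_dev *pdev, struct ice_rdma_info *info)
{
        /* "ice_rdma" is whatever name the RDMA platform_driver matches on */
        return platform_device_register_data(&pdev->dev, "ice_rdma",
                                             PLATFORM_DEVID_AUTO,
                                             info, sizeof(*info));
}
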
> > > > > > > >
> > > > > > > > I think you need Greg's ack on all this driver stuff -
> > > > > > > > particularly that a platform_device is OK.
> > > > > > >
> > > > > > > A platform_device is almost NEVER ok.
> > > > > > >
> > > > > > > Don't abuse it, make a real device on a real bus.  If you
> > > > > > > don't have a real bus and just need to create a device to
> > > > > > > hang other things off of, then use the virtual one, that's
> > > > > > > what it is there for.
> > > > > >
> > > > > > Ideally I'd like to see all the RDMA drivers that connect to
> > > > > > ethernet drivers use some similar scheme.
> > > > >
> > > > > Why?  They should be attached to a "real" device, why make any up?
> > > >
> > > > ? A "real" device, like struct pci_device, can only bind to one
> > > > driver. How can we bind it concurrently to net, rdma, scsi, etc?
> > >
> > > MFD was designed for this very problem.
> > >
> > > > > > This is for a PCI device that plugs into multiple subsystems
> > > > > > in the kernel, i.e. it has net driver functionality, rdma
> > > > > > functionality, and some even have SCSI functionality.
> > > > >
> > > > > Sounds like a MFD device, why aren't you using that
> > > > > functionality instead?
> > > >
> > > > This was also my advice, but in another email Jeff says:
> > > >
> > > >   MFD architecture was also considered, and we selected the simpler
> > > >   platform model. Supporting a MFD architecture would require an
> > > >   additional MFD core driver, individual platform netdev, RDMA function
> > > >   drivers, and stripping a large portion of the netdev drivers into
> > > >   MFD core. The sub-devices registered by MFD core for function
> > > >   drivers are indeed platform devices.
> > >
> > > So, "mfd is too hard, let's abuse a platform device" is ok?
> > >
> > > People have been wanting to do MFD drivers for PCI devices for a
> > > long time, it's about time someone actually did the work for it, I
> > > bet it will not be all that complex if tiny embedded drivers can do
> > > it :)
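
For reference, a rough sketch of what the MFD route could look like from a
PCI driver (cell names purely illustrative; the actual carve-up of resources
between the functions would be device specific):

#include <linux/mfd/core.h>
#include <linux/pci.h>

/* one cell per function driver; the MFD core registers a platform
 * sub-device for each of these under the PCI device */
static const struct mfd_cell ice_mfd_cells[] = {
        { .name = "ice-eth"  },         /* ethernet function driver */
        { .name = "ice-rdma" },         /* RDMA function driver */
};

static int ice_mfd_setup(struct pci_dev *pdev)
{
        return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
                               ice_mfd_cells, ARRAY_SIZE(ice_mfd_cells),
                               NULL, 0, NULL);
}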
> > >
> > Hi Greg - Thanks for your feedback!
> >
> > We currently have 2 PCI function netdev drivers in the kernel (i40e &
> > ice) that support devices (x722 & e810) which are RDMA capable. Our
> > objective is to add a single unified RDMA driver (as this is a
> > subsystem-specific requirement) which needs to access HW resources
> > from the netdev PF drivers. Attaching platform devices from the netdev
> > drivers to the platform bus and having a single RDMA platform driver
> > bind to them and access these resources seemed like a simple approach
> > to realize our objective. But it seems like attaching platform devices
> > is wrong. I would like to understand why.
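
The RDMA side of that approach would then be a single platform driver along
these lines (a sketch only; driver and device names are illustrative):

#include <linux/module.h>
#include <linux/platform_device.h>

static int irdma_probe(struct platform_device *pdev)
{
        /* platform data carries the HW resources handed off by the PF driver */
        void *res = dev_get_platdata(&pdev->dev);

        if (!res)
                return -ENODEV;
        /* ... bring up the RDMA function using those resources ... */
        return 0;
}

static int irdma_remove(struct platform_device *pdev)
{
        return 0;
}

static struct platform_driver irdma_driver = {
        .probe  = irdma_probe,
        .remove = irdma_remove,
        .driver = {
                .name = "ice_rdma",     /* binds to the device name registered by ice */
        },
};
module_platform_driver(irdma_driver);
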
> 
> Because that is NOT what a platform device is for.
> 
> It was originally created for those types of devices that live on the "platform" bus,
> i.e. things that are hard-wired and you just "know" are in your system because they
> are located at specific locations.  We used to generate them from board files, and
> then when we got DT, we create them from the resources and locations
> that DT describes for those devices.
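
As an illustration of that original use, a board file simply declares a
device that is known to exist at a fixed address (addresses and names made
up here):

#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static struct resource board_uart_resources[] = {
        DEFINE_RES_MEM(0x10010000, 0x100),      /* registers hard-wired on the SoC */
        DEFINE_RES_IRQ(42),
};

static struct platform_device board_uart_device = {
        .name           = "board-uart",
        .id             = 0,
        .resource       = board_uart_resources,
        .num_resources  = ARRAY_SIZE(board_uart_resources),
};

/* registered unconditionally at boot because the board is known to have it */
static int __init board_devices_init(void)
{
        return platform_device_register(&board_uart_device);
}
arch_initcall(board_devices_init);
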
> 
> They are NOT to be abused and used whenever someone wants to put them
> somewhere in the "middle" of the device tree because they feel like they are easier
> to use instead of creating a real bus and drivers.
> 
> Yes, they do get abused, and I need to sweep the tree again and fix up all of the
> places where this has crept back in.  But now that I know you are thinking of doing
> this, I'll keep saying to NOT do it for your use case either :)

Thanks Greg for the explanation.
And yes, we went by some example usages in the tree.
> 
> > Are platform sub-devices only to be added from an MFD core driver? I
> > am also wondering whether the MFD architecture would allow for
> > realizing a single RDMA driver, and whether we need an MFD core driver
> > for each device (x722 & e810) or whether it can be a single driver.
> 
> I do not know the details of how MFD works, please ask those developers for
> specifics.  If MFD doesn't work, then create a tiny virtual bus and make sub-
> devices out of that.  If you need a "generic" way to do this for PCI devices, then
> please create that as you are not the only one that keeps wanting this, as for some
> reason PCI hardware vendors don't like dividing up their devices in ways that
> would have made it much simpler to create individual devices (probably saves
> some gates and firmware complexity on the device).
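
The "tiny virtual bus" option would look roughly like this (a heavily
simplified sketch; real code would match sub-devices to drivers on something
richer than the device name):

#include <linux/device.h>
#include <linux/module.h>
#include <linux/string.h>

/* simplistic match: bind a driver to any sub-device carrying its name */
static int ice_vbus_match(struct device *dev, struct device_driver *drv)
{
        return !strcmp(dev_name(dev), drv->name);
}

static struct bus_type ice_virtual_bus = {
        .name  = "ice_virtual",
        .match = ice_vbus_match,
};

static int __init ice_vbus_init(void)
{
        /* the netdev drivers would register sub-devices, and the RDMA
         * driver a driver, on this bus instead of on the platform bus */
        return bus_register(&ice_virtual_bus);
}
module_init(ice_vbus_init);
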
> 

Thank you for laying out the options. We will review internally and get back.

Shiraz
