Date:   Thu, 24 Oct 2019 22:25:36 +0000
From:   "Ertman, David M" <david.m.ertman@...el.com>
To:     Jason Gunthorpe <jgg@...pe.ca>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
CC:     "Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
        "Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "dledford@...hat.com" <dledford@...hat.com>,
        "Ismail, Mustafa" <mustafa.ismail@...el.com>,
        "Patil, Kiran" <kiran.patil@...el.com>,
        "lee.jones@...aro.org" <lee.jones@...aro.org>
Subject: RE: [RFC 01/20] ice: Initialize and register multi-function device
 to provide RDMA

> -----Original Message-----
> From: Jason Gunthorpe [mailto:jgg@...pe.ca]
> Sent: Thursday, October 24, 2019 12:11 PM
> To: gregkh@...uxfoundation.org
> Cc: Ertman, David M <david.m.ertman@...el.com>; Nguyen, Anthony L
> <anthony.l.nguyen@...el.com>; Kirsher, Jeffrey T
> <jeffrey.t.kirsher@...el.com>; netdev@...r.kernel.org; linux-
> rdma@...r.kernel.org; dledford@...hat.com; Ismail, Mustafa
> <mustafa.ismail@...el.com>; Patil, Kiran <kiran.patil@...el.com>
> Subject: Re: [RFC 01/20] ice: Initialize and register multi-function device to
> provide RDMA
> 
> On Thu, Oct 24, 2019 at 02:56:59PM -0400, gregkh@...uxfoundation.org wrote:
> > On Wed, Oct 23, 2019 at 03:01:09PM -0300, Jason Gunthorpe wrote:
> > > On Wed, Oct 23, 2019 at 05:55:38PM +0000, Ertman, David M wrote:
> > > > > Did any resolution happen here? Dave, do you know what to do to
> > > > > get Greg's approval?
> > > > >
> > > > > Jason
> > > >
> > > > This was the last communication that I saw on this topic.  I was
> > > > taking Greg's silence as "Oh ok, that works" :)  I hope I was not being too
> optimistic!
> > > >
> > > > If there is any outstanding issue I am not aware of it, but please
> > > > let me know if I am out of the loop!
> > > >
> > > > Greg, if you have any other concerns or questions I would be happy to
> address them!
> > >
> > > I was hoping to hear Greg say that taking a pci_device, feeding it
> > > to the multi-function-device stuff to split it to a bunch of
> > > platform_device's is OK, or that mfd should be changed somehow..
> >
> > Again, platform devices are ONLY for actual platform devices.  A PCI
> > device is NOT a platform device, sorry.
> 
> To be fair to David, IIRC, you did suggest mfd as the solution here some months
> ago, but I think you also said it might need some fixing
> :)
> 
> > If MFD needs to be changed to handle non-platform devices, fine, but
> > maybe what you really need to do here is make your own "bus" of
> > individual devices and have drivers for them, as you can't have a
> > "normal" PCI driver for these.
> 
> It does feel like MFD is the cleaner model here otherwise we'd have each
> driver making its own custom buses for its multi-function capability..
> 
> David, do you see some path to fix mfd to not use platform devices?
> 
> Maybe it needs a MFD bus type and a 'struct mfd_device' ?
> 
> I guess I'll drop these patches until it is sorted.
> 
> Jason


The original submission of the RDMA driver had separate drivers to
interact with the ice and i40e LAN drivers.  Only about 2000 lines of
code differed between them, so a request was (rightly) made to unify
them into a single RDMA driver.

Our original submission for IIDC had a "software bus" that the ice driver
created.  The problem is that, now that the RDMA driver is unified across
ice and i40e, each LAN driver would need to create its own bus.  So we
cannot have module dependencies for the irdma driver, since we do not know
which hardware the user will have installed in the system, or in what
order the drivers will load.  As new hardware is supported (presumably by
the same irdma driver), this only gets more complicated.  For instance, if
the ice driver loads, then irdma, then i40e, irdma gets no notification
that i40e has created a new bus that irdma needs to register with.
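
To illustrate (with hypothetical names, not the actual ice/i40e code),
the per-LAN-driver bus model would look roughly like this, and the
load-order hole is visible in the comments:

#include <linux/device.h>
#include <linux/module.h>

/* Hypothetical sketch: each LAN driver registers its own software bus */
static struct bus_type ice_peer_bus = {
        .name = "ice_peer",
};

static int __init ice_peer_init(void)
{
        return bus_register(&ice_peer_bus);     /* done when ice loads */
}
module_init(ice_peer_init);
MODULE_LICENSE("GPL");

/* i40e would register a separate "i40e_peer" bus the same way.  The
 * unified irdma driver can only driver_register() against buses that
 * already exist when it loads; if i40e loads later and only then
 * registers its bus, irdma gets no notification and never binds. */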

Our original solution to this problem used netdev notifiers, which met
with resistance and the statement that the bus infrastructure was the
proper way to approach the interaction between the LAN driver and its
peer.  That did turn out to be a much more elegant way to approach the
issue.
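
For reference, the rejected notifier approach looked roughly like the
following (handler and variable names are illustrative, not from the
actual submission):

#include <linux/module.h>
#include <linux/netdevice.h>

/* irdma watches for new netdevs and checks whether each one belongs
 * to a supported LAN driver. */
static int irdma_netdev_event(struct notifier_block *nb,
                              unsigned long event, void *ptr)
{
        struct net_device *netdev = netdev_notifier_info_to_dev(ptr);

        if (event == NETDEV_REGISTER && netdev->dev.parent) {
                /* identify ice/i40e via the parent PCI device */
        }
        return NOTIFY_DONE;
}

static struct notifier_block irdma_netdev_nb = {
        .notifier_call = irdma_netdev_event,
};

static int __init irdma_init(void)
{
        return register_netdevice_notifier(&irdma_netdev_nb);
}
module_init(irdma_init);
MODULE_LICENSE("GPL");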

Direct access of the platform bus was unacceptable, and Greg suggested
the MFD sub-system as the solution.  The MFD sub-system uses the platform
bus in the background as the base for its functionality, since the
platform bus is a purely software construct that is handy and fulfills
MFD's needs.  The question then is: if the MFD sub-system is using the
platform bus for all of its background functionality, is the platform bus
really only for platform devices?  It seems that the kernel is already
using the platform bus as a generic software-based bus, and it fills that
role efficiently.
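
As a concrete, simplified example of what we are doing now (cell and
function names here are guesses for illustration, not the actual
patches): the LAN driver registers an MFD cell off its PCI device, MFD
instantiates a platform_device for the cell behind the scenes, and
irdma binds to it as an ordinary platform_driver:

#include <linux/mfd/core.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

/* LAN driver side: one cell per exposed function */
static const struct mfd_cell ice_rdma_cell = {
        .name = "ice_rdma",             /* match key for the peer driver */
};

static int ice_register_rdma(struct pci_dev *pdev)
{
        /* MFD creates a platform_device named "ice_rdma" parented to
         * the PCI device; no resources or IRQs are passed here. */
        return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_NONE,
                               &ice_rdma_cell, 1, NULL, 0, NULL);
}

/* irdma side: a normal platform_driver bound by cell name */
static int irdma_probe(struct platform_device *pdev)
{
        return 0;       /* set up RDMA for the parent PCI function */
}

static struct platform_driver irdma_driver = {
        .probe = irdma_probe,
        .driver = { .name = "ice_rdma" },
};

i40e would register its own cell name the same way, and irdma can match
both through a platform id_table, so driver load order no longer matters.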

Dave E.
