Message-ID: <20190704134612.GB10963@kroah.com>
Date: Thu, 4 Jul 2019 15:46:12 +0200
From: Greg KH <gregkh@...uxfoundation.org>
To: Jason Gunthorpe <jgg@...lanox.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"dledford@...hat.com" <dledford@...hat.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"nhorman@...hat.com" <nhorman@...hat.com>,
"sassmann@...hat.com" <sassmann@...hat.com>,
"poswald@...e.com" <poswald@...e.com>,
"mustafa.ismail@...el.com" <mustafa.ismail@...el.com>,
"shiraz.saleem@...el.com" <shiraz.saleem@...el.com>,
Dave Ertman <david.m.ertman@...el.com>,
Andrew Bowers <andrewx.bowers@...el.com>
Subject: Re: [net-next 1/3] ice: Initialize and register platform device to
provide RDMA
On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > From: Tony Nguyen <anthony.l.nguyen@...el.com>
> > > > > >
> > > > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > > > Thus the ice driver needs to provide access to the RDMA hardware block
> > > > > > via a virtual bus; utilize the platform bus to provide this access.
> > > > > >
> > > > > > This patch initializes the driver to support RDMA and creates and
> > > > > > registers a platform device for the RDMA driver to register to.
> > > > > > At this point the driver is fully initialized to register a platform
> > > > > > driver; however, it cannot yet register as the ops have not been
> > > > > > implemented.
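
For reference, a minimal sketch of that kind of registration; the
"ice_rdma" name and the peer-data struct here are illustrative
assumptions, not taken from the patch itself:

#include <linux/pci.h>
#include <linux/platform_device.h>

/* Hypothetical data handed to the RDMA function driver. */
struct ice_rdma_peer_data {
	struct pci_dev *pdev;
	void __iomem *hw_addr;
};

static struct platform_device *
ice_register_rdma_pdev(struct pci_dev *pdev, struct ice_rdma_peer_data *data)
{
	/*
	 * Create a child platform device, parented to the PCI device,
	 * that the RDMA driver's platform_driver can bind to by name.
	 * platform_device_register_data() copies *data as platform_data;
	 * the caller checks the result with IS_ERR().
	 */
	return platform_device_register_data(&pdev->dev, "ice_rdma",
					     PLATFORM_DEVID_AUTO,
					     data, sizeof(*data));
}
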
> > > > >
> > > > > I think you need Greg's ack on all this driver stuff - particularly
> > > > > that a platform_device is OK.
> > > >
> > > > A platform_device is almost NEVER ok.
> > > >
> > > > Don't abuse it, make a real device on a real bus. If you don't have a
> > > > real bus and just need to create a device to hang other things off of,
> > > > then use the virtual one, that's what it is there for.
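
As a rough illustration of that last suggestion, a bus-less child
device hung off the real PCI device can be as small as the sketch
below (all names are made up and error handling is trimmed):

#include <linux/device.h>
#include <linux/pci.h>
#include <linux/slab.h>

struct foo_rdma_dev {
	struct device dev;
	/* function-specific state lives here */
};

static void foo_rdma_release(struct device *dev)
{
	kfree(container_of(dev, struct foo_rdma_dev, dev));
}

static struct foo_rdma_dev *foo_rdma_dev_create(struct pci_dev *pdev)
{
	struct foo_rdma_dev *rdev = kzalloc(sizeof(*rdev), GFP_KERNEL);

	if (!rdev)
		return NULL;

	device_initialize(&rdev->dev);
	rdev->dev.parent = &pdev->dev;		/* hang it off the real device */
	rdev->dev.release = foo_rdma_release;
	dev_set_name(&rdev->dev, "%s-rdma", pci_name(pdev));

	if (device_add(&rdev->dev)) {
		put_device(&rdev->dev);
		return NULL;
	}
	return rdev;
}
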
> > >
> > > Ideally I'd like to see all the RDMA drivers that connect to ethernet
> > > drivers use some similar scheme.
> >
> > Why? They should be attached to a "real" device, why make any up?
>
> ? A "real" device, like struct pci_device, can only bind to one
> driver. How can we bind it concurrently to net, rdma, scsi, etc?
MFD was designed for this very problem.
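
To make that concrete: with MFD the single PCI device describes each of
its functions as a cell, and the MFD core spawns one child device per
cell for the corresponding function driver to bind to. A tiny sketch,
with invented cell names:

#include <linux/mfd/core.h>

/* One cell per function exposed by the single PCI device. */
static const struct mfd_cell foo_cells[] = {
	{ .name = "foo-eth"  },		/* netdev function driver binds here */
	{ .name = "foo-rdma" },		/* RDMA function driver binds here */
	/* { .name = "foo-scsi" }, and so on */
};
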
> > > This is for a PCI device that plugs into multiple subsystems in the
> > > kernel, ie it has net driver functionality, rdma functionality, some
> > > even have SCSI functionality
> >
> > Sounds like a MFD device, why aren't you using that functionality
> > instead?
>
> This was also my advice, but in another email Jeff says:
>
> MFD architecture was also considered, and we selected the simpler
> platform model. Supporting a MFD architecture would require an
> additional MFD core driver, individual platform netdev, RDMA function
> drivers, and stripping a large portion of the netdev drivers into
> MFD core. The sub-devices registered by MFD core for function
> drivers are indeed platform devices.
So, "mfd is too hard, let's abuse a platform device" is ok?
People have been wanting to do MFD drivers for PCI devices for a long
time; it's about time someone actually did the work for it. I bet it
will not be all that complex if tiny embedded drivers can do it :)
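
For what it's worth, the PCI-side wiring is roughly the sketch below
(placeholder names and device IDs, no error unwinding or BAR setup);
it just calls mfd_add_devices() straight from the PCI probe:

#include <linux/mfd/core.h>
#include <linux/module.h>
#include <linux/pci.h>

/* Same per-function cells as sketched above. */
static const struct mfd_cell foo_cells[] = {
	{ .name = "foo-eth"  },
	{ .name = "foo-rdma" },
};

static int foo_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* One child platform device per cell, parented to the PCI device. */
	return mfd_add_devices(&pdev->dev, 0, foo_cells,
			       ARRAY_SIZE(foo_cells), NULL, 0, NULL);
}

static void foo_pci_remove(struct pci_dev *pdev)
{
	mfd_remove_devices(&pdev->dev);
}

static const struct pci_device_id foo_pci_ids[] = {
	{ PCI_DEVICE(0x8086, 0x1234) },		/* placeholder IDs */
	{ }
};
MODULE_DEVICE_TABLE(pci, foo_pci_ids);

static struct pci_driver foo_pci_driver = {
	.name		= "foo",
	.id_table	= foo_pci_ids,
	.probe		= foo_pci_probe,
	.remove		= foo_pci_remove,
};
module_pci_driver(foo_pci_driver);
MODULE_LICENSE("GPL v2");
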
thanks,
greg k-h