Message-ID: <20191108205204.GB1277001@kroah.com>
Date: Fri, 8 Nov 2019 21:52:04 +0100
From: "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Parav Pandit <parav@...lanox.com>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Jiri Pirko <jiri@...nulli.us>,
David M <david.m.ertman@...el.com>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Saeed Mahameed <saeedm@...lanox.com>,
"kwankhede@...dia.com" <kwankhede@...dia.com>,
"leon@...nel.org" <leon@...nel.org>,
"cohuck@...hat.com" <cohuck@...hat.com>,
Jiri Pirko <jiri@...lanox.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
Or Gerlitz <gerlitz.or@...il.com>
Subject: Re: [PATCH net-next 00/19] Mellanox, mlx5 sub function support
On Fri, Nov 08, 2019 at 04:32:09PM -0400, Jason Gunthorpe wrote:
> On Fri, Nov 08, 2019 at 08:20:43PM +0000, Parav Pandit wrote:
> >
> >
> > > From: Jason Gunthorpe <jgg@...pe.ca>
> > > On Fri, Nov 08, 2019 at 11:12:38AM -0800, Jakub Kicinski wrote:
> > > > On Fri, 8 Nov 2019 15:40:22 +0000, Parav Pandit wrote:
> > > > > > The new intel driver has been having a very similar discussion
> > > > > > about how to model their 'multi function device', i.e. to bind
> > > > > > RDMA and other drivers to a shared PCI function, and I think that
> > > > > > discussion settled on adding a new bus?
> > > > > >
> > > > > > Really these things are all very similar, it would be nice to have
> > > > > > a clear methodology on how to use the device core if a single PCI
> > > > > > device is split by software into multiple different functional
> > > > > > units and attached to different driver instances.
> > > > > >
> > > > > > Currently there is a lot of hacking in this area, and a consistent
> > > > > > scheme might resolve the ugliness with the dma_ops wrappers.
> > > > > >
> > > > > > We already have the 'mfd' stuff to support splitting platform
> > > > > > devices, maybe we need to create a 'pci-mfd' to support splitting
> > > > > > PCI devices?
> > > > > >
> > > > > > I'm not really clear how mfd and mdev relate; I always thought
> > > > > > mdev was strongly linked to vfio.
> > > > > >
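As a rough sketch of the 'pci-mfd' style splitting mentioned above (the
"foo" parent and cell names are made up for illustration; mfd_add_devices()
registers one child platform device per declared cell):

#include <linux/kernel.h>
#include <linux/mfd/core.h>
#include <linux/platform_device.h>

/* Each cell becomes a child platform device bound by its own driver. */
static const struct mfd_cell foo_cells[] = {
	{ .name = "foo-rdma" },
	{ .name = "foo-eth" },
};

static int foo_split(struct device *parent)
{
	/* Register one child device per cell under the shared parent. */
	return mfd_add_devices(parent, PLATFORM_DEVID_AUTO, foo_cells,
			       ARRAY_SIZE(foo_cells), NULL, 0, NULL);
}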
> > > > >
> > > > > Mdev at the beginning was strongly linked to vfio, but as I
> > > > > mentioned above it is now addressing more use cases.
> > > > >
> > > > > I followed that discussion, but was not sure about extending mdev
> > > > > further.
> > > > >
> > > > > One way for the Intel drivers to do this is to build on series [9]:
> > > > > the PCI driver registers an mdev with MDEV_CLASS_ID_I40_FOO, and the
> > > > > RDMA driver calls mdev_register_driver(), matches on that class id,
> > > > > and does the probe().
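As a rough illustration of that flow on the RDMA-driver side: class-id
matching is only what series [9] proposes, so the id_table field,
struct mdev_class_id, and all names here are assumptions, not mainline API:

#include <linux/mdev.h>
#include <linux/module.h>

static int i40_foo_rdma_probe(struct device *dev)
{
	/* The RDMA driver binds here to the mdev the PCI driver created. */
	return 0;
}

/* Proposed in the series: match on the class id the parent assigned. */
static const struct mdev_class_id i40_foo_ids[] = {
	{ MDEV_CLASS_ID_I40_FOO },
	{},
};

static struct mdev_driver i40_foo_rdma_driver = {
	.name		= "i40_foo_rdma",
	.probe		= i40_foo_rdma_probe,
	.id_table	= i40_foo_ids,
};

static int __init i40_foo_rdma_init(void)
{
	return mdev_register_driver(&i40_foo_rdma_driver, THIS_MODULE);
}
module_init(i40_foo_rdma_init);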
> > > >
> > > > Yup, FWIW to me the benefit of reusing mdevs for the Intel case vs
> > > > muddying the purpose of mdevs is not a clear trade-off.
> > >
> > > IMHO, mdev has a mdev_parent_ops structure clearly intended to link it
> > > to vfio, so using a mdev for something not related to vfio seems like a
> > > poor choice.
> > >
> > Splitting mdev_parent_ops{} is already in the works for a larger use case
> > in series [1] for virtio.
> >
> > [1] https://patchwork.kernel.org/patch/11233127/
>
> Weird. So what is mdev actually providing, and what does it represent,
> if the entire driver-facing API surface is under a union?
>
> This smells a lot like it is re-implementing a bus. AFAIK a bus is
> supposed to represent the in-kernel API the struct device presents to
> drivers.
Yes, yes yes yes...
I'm getting tired of saying the same thing here: just use a bus, that's
what it is there for.
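A minimal sketch of what that looks like with the driver core (the bus and
function names are made up for illustration):

#include <linux/device.h>
#include <linux/init.h>
#include <linux/string.h>

/* Trivial match rule: bind a driver to any device with the same name. */
static int subfunc_match(struct device *dev, struct device_driver *drv)
{
	return !strcmp(dev_name(dev), drv->name);
}

static struct bus_type subfunc_bus = {
	.name	= "subfunction",
	.match	= subfunc_match,
};

static int __init subfunc_bus_init(void)
{
	/*
	 * The PCI driver registers the bus and then adds its sub-devices
	 * on it; RDMA/netdev drivers register against the bus and match()
	 * binds them to those devices.
	 */
	return bus_register(&subfunc_bus);
}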
greg k-h