Message-ID: <20191108111238.578f44f1@cakuba>
Date: Fri, 8 Nov 2019 11:12:38 -0800
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: Parav Pandit <parav@...lanox.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>, Jiri Pirko <jiri@...nulli.us>,
"Ertman@...pe.ca" <Ertman@...pe.ca>,
David M <david.m.ertman@...el.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Saeed Mahameed <saeedm@...lanox.com>,
"kwankhede@...dia.com" <kwankhede@...dia.com>,
"leon@...nel.org" <leon@...nel.org>,
"cohuck@...hat.com" <cohuck@...hat.com>,
Jiri Pirko <jiri@...lanox.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
Or Gerlitz <gerlitz.or@...il.com>
Subject: Re: [PATCH net-next 00/19] Mellanox, mlx5 sub function support

On Fri, 8 Nov 2019 15:40:22 +0000, Parav Pandit wrote:
> > The new intel driver has been having a very similar discussion about how to
> > model their 'multi function device' ie to bind RDMA and other drivers to a
> > shared PCI function, and I think that discussion settled on adding a new bus?
> >
> > Really these things are all very similar, it would be nice to have a clear
> > methodology on how to use the device core if a single PCI device is split by
> > software into multiple different functional units and attached to different
> > driver instances.
> >
> > Currently there is a lot of hacking in this area, and a consistent
> > scheme might resolve the ugliness with the dma_ops wrappers.
> >
> > We already have the 'mfd' stuff to support splitting platform devices, maybe
> > we need to create a 'pci-mfd' to support splitting PCI devices?
> >
> > I'm not really clear how mfd and mdev relate, I always thought mdev was
> > strongly linked to vfio.
> >
> Mdev at the beginning was strongly linked to vfio, but as I mentioned
> above it is now addressing more use cases.
>
> I observed that discussion, but was not sure about extending mdev further.
>
> One way for the Intel drivers to do this is to follow series [9],
> where the PCI driver sets MDEV_CLASS_ID_I40_FOO on the mdev it creates,
> and the RDMA driver calls mdev_register_driver(), matches on that class
> id, and does the probe().
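For reference, under that series the hookup would look roughly like the
sketch below. MDEV_CLASS_ID_I40_FOO, mdev_set_class() and the id-table
shape follow the proposed patches, not a merged kernel API, so treat the
names as illustrative:

/*
 * Sketch of the class-id matching flow described above; based on the
 * proposed series [9], not on a merged kernel API.
 */
#include <linux/module.h>
#include <linux/mdev.h>

/* PCI driver side: tag the mdev it creates with a class id
 * (called from its mdev create path). */
static int i40_foo_tag_mdev(struct mdev_device *mdev)
{
	mdev_set_class(mdev, MDEV_CLASS_ID_I40_FOO);
	return 0;
}

/* RDMA driver side: declare which class ids it can drive... */
static const struct mdev_class_id i40_foo_rdma_ids[] = {
	{ MDEV_CLASS_ID_I40_FOO },
	{}
};

static int i40_foo_rdma_probe(struct device *dev)
{
	/* bring up the RDMA function on top of the shared PCI device */
	return 0;
}

static struct mdev_driver i40_foo_rdma_driver = {
	.name     = "i40_foo_rdma",
	.id_table = i40_foo_rdma_ids,
	.probe    = i40_foo_rdma_probe,
};

/* ... and register so the mdev core matches and calls probe(). */
static int __init i40_foo_rdma_init(void)
{
	return mdev_register_driver(&i40_foo_rdma_driver, THIS_MODULE);
}
module_init(i40_foo_rdma_init);
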
Yup, FWIW to me the benefit of reusing mdevs for the Intel case vs
muddying the purpose of mdevs is not a clear trade-off.
IMHO MFD would be a more natural fit for Intel, since it's about
providing different functionality rather than virtual slices of the
same device.
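
To make that concrete, something like the below is roughly all the PCI
driver would need with the existing MFD core (a minimal sketch; the
"foo" cell and driver names are hypothetical):

#include <linux/mfd/core.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

/* One cell per function carved out of the PCI device. */
static const struct mfd_cell foo_cells[] = {
	{ .name = "foo-netdev" },
	{ .name = "foo-rdma"   },
};

static int foo_pci_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* Spawns one platform device per cell; each binds to the
	 * platform driver of the same name. */
	return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
			       foo_cells, ARRAY_SIZE(foo_cells),
			       NULL, 0, NULL);
}

/* The RDMA half is then just an ordinary platform driver. */
static struct platform_driver foo_rdma_driver = {
	.driver = { .name = "foo-rdma" },
};

Each function then gets its own struct device and driver model plumbing
without dragging in mdev/vfio.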
> > At the very least if it is agreed mdev should be the vehicle here, then it
> > should also be able to solve the netdev/rdma hookup problem too.