[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20191108134559.42fbceff@cakuba>
Date: Fri, 8 Nov 2019 13:45:59 -0800
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Parav Pandit <parav@...lanox.com>, Jiri Pirko <jiri@...nulli.us>,
David M <david.m.ertman@...el.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Saeed Mahameed <saeedm@...lanox.com>,
"kwankhede@...dia.com" <kwankhede@...dia.com>,
"leon@...nel.org" <leon@...nel.org>,
"cohuck@...hat.com" <cohuck@...hat.com>,
Jiri Pirko <jiri@...lanox.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
Or Gerlitz <gerlitz.or@...il.com>
Subject: Re: [PATCH net-next 00/19] Mellanox, mlx5 sub function support
On Fri, 8 Nov 2019 16:12:53 -0400, Jason Gunthorpe wrote:
> On Fri, Nov 08, 2019 at 11:12:38AM -0800, Jakub Kicinski wrote:
> > On Fri, 8 Nov 2019 15:40:22 +0000, Parav Pandit wrote:
> > > Mdev at beginning was strongly linked to vfio, but as I mentioned
> > > above it is addressing more use case.
> > >
> > > I observed that discussion, but was not sure of extending mdev further.
> > >
> > > One way to do for Intel drivers to do is after series [9].
> > > Where PCI driver says, MDEV_CLASS_ID_I40_FOO
> > > RDMA driver mdev_register_driver(), matches on it and does the probe().
> >
> > Yup, FWIW to me the benefit of reusing mdevs for the Intel case vs
> > muddying the purpose of mdevs is not a clear trade off.
>
> IMHO, mdev has amdev_parent_ops structure clearly intended to link it
> to vfio, so using a mdev for something not related to vfio seems like
> a poor choice.
Yes, my suggestion to use mdev was entirely based on the premise that
the purpose of this work is to get vfio working.. otherwise I'm unclear
as to why we'd need a bus in the first place. If this is just for
containers - we have macvlan offload for years now, with no need for a
separate device.
> I suppose this series is the start and we will eventually see the
> mlx5's mdev_parent_ops filled in to support vfio - but *right now*
> this looks identical to the problem most of the RDMA capable net
> drivers have splitting into a 'core' and a 'function'
On the RDMA/Intel front, would you mind explaining what the main
motivation for the special buses is? I'm a little confurious.
My understanding is MFD was created to help with cases where single
device has multiple pieces of common IP in it. Do modern RDMA cards
really share IP across generations? Is there a need to reload the
drivers for the separate pieces (I wonder if the devlink reload doesn't
belong to the device model :().
Or is it purely an abstraction and people like abstractions?
Powered by blists - more mailing lists