Message-ID: <20191108144054.GC10956@ziepe.ca>
Date: Fri, 8 Nov 2019 10:40:54 -0400
From: Jason Gunthorpe <jgg@...pe.ca>
To: Jiri Pirko <jiri@...nulli.us>, "Ertman, David M" <david.m.ertman@...el.com>,
	gregkh@...uxfoundation.org
Cc: Jakub Kicinski <jakub.kicinski@...ronome.com>,
Parav Pandit <parav@...lanox.com>, alex.williamson@...hat.com,
davem@...emloft.net, kvm@...r.kernel.org, netdev@...r.kernel.org,
saeedm@...lanox.com, kwankhede@...dia.com, leon@...nel.org,
cohuck@...hat.com, jiri@...lanox.com, linux-rdma@...r.kernel.org,
Or Gerlitz <gerlitz.or@...il.com>
Subject: Re: [PATCH net-next 00/19] Mellanox, mlx5 sub function support
On Fri, Nov 08, 2019 at 01:12:33PM +0100, Jiri Pirko wrote:
> Thu, Nov 07, 2019 at 09:32:34PM CET, jakub.kicinski@...ronome.com wrote:
> >On Thu, 7 Nov 2019 10:04:48 -0600, Parav Pandit wrote:
> >> Mellanox sub function capability allows users to create several hundred
> >> networking and/or rdma devices without depending on PCI SR-IOV support.
> >
> >You call the new port type "sub function" but the devlink port flavour
> >is mdev.
> >
> >As I'm sure you remember you nacked my patches exposing NFP's PCI
> >sub functions which are just regions of the BAR without any mdev
> >capability. Am I in the clear to repost those now? Jiri?
>
> Well, the question is whether it makes sense to have SFs without having them
> as mdev? I mean, we discussed the modelling thoroughly and eventually we
> realized that in order to model this correctly, we need SFs on "a bus".
> Originally we were thinking about custom bus, but mdev is already there
> to handle this.
Did anyone consult Greg on this?
There has been a very similar discussion around the new Intel driver
about how to model their 'multi function device', ie how to bind RDMA
and other drivers to a shared PCI function, and I think that
discussion settled on adding a new bus?
Really these things are all very similar; it would be nice to have a
clear methodology for using the device core when a single PCI device
is split by software into multiple functional units and attached to
different driver instances.
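To make that concrete, something like the sketch below is what I have
in mind: a tiny custom bus whose child devices are created by the
parent PCI driver, so that separate drivers (netdev, rdma, ...) can
each bind to one slice. The 'subfunc' naming and the trivial match
are invented for illustration, not taken from any existing driver:

/* Hypothetical "subfunc" bus: hang several software functions off a
 * single parent PCI device so different drivers can bind to each. */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/pci.h>
#include <linux/slab.h>

static int subfunc_match(struct device *dev, struct device_driver *drv)
{
	/* toy match: a real bus would compare device/driver id tables */
	return 1;
}

static struct bus_type subfunc_bus = {
	.name  = "subfunc",
	.match = subfunc_match,
};

struct subfunc_device {
	struct device dev;
};

static void subfunc_release(struct device *dev)
{
	kfree(container_of(dev, struct subfunc_device, dev));
}

/* Called from the parent PCI driver's probe(), once per slice;
 * assumes bus_register(&subfunc_bus) already ran at module init. */
static struct subfunc_device *subfunc_add(struct pci_dev *pdev, int idx)
{
	struct subfunc_device *sf = kzalloc(sizeof(*sf), GFP_KERNEL);

	if (!sf)
		return ERR_PTR(-ENOMEM);
	sf->dev.bus = &subfunc_bus;
	sf->dev.parent = &pdev->dev;	/* DMA and PM follow the PCI parent */
	sf->dev.release = subfunc_release;
	dev_set_name(&sf->dev, "sf.%d", idx);
	if (device_register(&sf->dev)) {
		put_device(&sf->dev);
		return ERR_PTR(-ENODEV);
	}
	return sf;
}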
Currently there is a lot of hacking in this area, and a consistent
scheme might resolve the ugliness with the dma_ops wrappers.
We already have the 'mfd' stuff to support splitting platform devices;
maybe we need to create a 'pci-mfd' to support splitting PCI devices?
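For instance, reusing the existing mfd core against a PCI parent
already gets fairly close to that; this is only a sketch, and the
"foo-*" cell names are invented placeholders:

#include <linux/mfd/core.h>
#include <linux/pci.h>

/* Each cell becomes a platform device child of the PCI device,
 * picked up by a platform driver registered under the same name. */
static const struct mfd_cell foo_cells[] = {
	{ .name = "foo-netdev" },
	{ .name = "foo-rdma" },
};

static int foo_pci_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	int rc = pcim_enable_device(pdev);

	if (rc)
		return rc;
	return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
			       foo_cells, ARRAY_SIZE(foo_cells),
			       NULL, 0, NULL);
}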
I'm not really clear on how mfd and mdev relate; I always thought mdev
was strongly linked to vfio.
At the very least, if it is agreed that mdev should be the vehicle
here, then it should also be able to solve the netdev/rdma hookup
problem.
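In that case I would expect the parent PCI driver to register with
the mdev core along these lines (a rough sketch against the current
mdev_parent_ops; the foo_* names are placeholders and the type groups
and vfio-style callbacks are left empty):

#include <linux/mdev.h>
#include <linux/module.h>
#include <linux/pci.h>

static int foo_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
{
	/* carve out and initialize the HW slice backing this mdev */
	return 0;
}

static int foo_mdev_remove(struct mdev_device *mdev)
{
	/* tear the slice back down */
	return 0;
}

/* sysfs type groups advertising which sub function flavours the
 * parent can create; left empty in this sketch */
static struct attribute_group *foo_type_groups[] = {
	NULL,
};

static const struct mdev_parent_ops foo_mdev_ops = {
	.owner			= THIS_MODULE,
	.supported_type_groups	= foo_type_groups,
	.create			= foo_mdev_create,
	.remove			= foo_mdev_remove,
};

/* from the parent PCI driver's probe(): */
static int foo_mdev_init(struct pci_dev *pdev)
{
	return mdev_register_device(&pdev->dev, &foo_mdev_ops);
}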
Jason