Message-ID: <AM0PR05MB486658D1D2A4F3999ED95D45D17B0@AM0PR05MB4866.eurprd05.prod.outlook.com>
Date:   Fri, 8 Nov 2019 15:40:22 +0000
From:   Parav Pandit <parav@...lanox.com>
To:     Jason Gunthorpe <jgg@...pe.ca>, Jiri Pirko <jiri@...nulli.us>,
        "Ertman@...pe.ca" <Ertman@...pe.ca>,
        David M <david.m.ertman@...el.com>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
CC:     Jakub Kicinski <jakub.kicinski@...ronome.com>,
        "alex.williamson@...hat.com" <alex.williamson@...hat.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Saeed Mahameed <saeedm@...lanox.com>,
        "kwankhede@...dia.com" <kwankhede@...dia.com>,
        "leon@...nel.org" <leon@...nel.org>,
        "cohuck@...hat.com" <cohuck@...hat.com>,
        Jiri Pirko <jiri@...lanox.com>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        Or Gerlitz <gerlitz.or@...il.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: RE: [PATCH net-next 00/19] Mellanox, mlx5 sub function support

Hi Jason,

+ Greg

> -----Original Message-----
> From: Jason Gunthorpe <jgg@...pe.ca>
> Sent: Friday, November 8, 2019 8:41 AM
> To: Jiri Pirko <jiri@...nulli.us>; Ertman@...pe.ca; David M
> <david.m.ertman@...el.com>; gregkh@...uxfoundation.org
> Cc: Jakub Kicinski <jakub.kicinski@...ronome.com>; Parav Pandit
> <parav@...lanox.com>; alex.williamson@...hat.com;
> davem@...emloft.net; kvm@...r.kernel.org; netdev@...r.kernel.org;
> Saeed Mahameed <saeedm@...lanox.com>; kwankhede@...dia.com;
> leon@...nel.org; cohuck@...hat.com; Jiri Pirko <jiri@...lanox.com>; linux-
> rdma@...r.kernel.org; Or Gerlitz <gerlitz.or@...il.com>
> Subject: Re: [PATCH net-next 00/19] Mellanox, mlx5 sub function support
> 
> On Fri, Nov 08, 2019 at 01:12:33PM +0100, Jiri Pirko wrote:
> > Thu, Nov 07, 2019 at 09:32:34PM CET, jakub.kicinski@...ronome.com
> wrote:
> > >On Thu,  7 Nov 2019 10:04:48 -0600, Parav Pandit wrote:
> > >> Mellanox sub function capability allows users to create several
> > >> hundreds of networking and/or rdma devices without depending on PCI
> SR-IOV support.
> > >
> > >You call the new port type "sub function" but the devlink port
> > >flavour is mdev.
> > >
> > >As I'm sure you remember you nacked my patches exposing NFP's PCI sub
> > >functions which are just regions of the BAR without any mdev
> > >capability. Am I in the clear to repost those now? Jiri?
> >
> > Well question is, if it makes sense to have SFs without having them as
> > mdev? I mean, we discussed the modelling thoroughly and eventually we
> > realized that in order to model this correctly, we need SFs on "a bus".
> > Originally we were thinking about custom bus, but mdev is already
> > there to handle this.
> 
> Did anyone consult Greg on this?
> 
Back when I started with the subdev bus in March, we consulted Greg and the mdev maintainers.
After that, we settled on extending mdev for wider use cases; more below.
It has since been extended for multiple users, for example virtio, in addition to vfio and mlx5_core.

> The new intel driver has been having a very similar discussion about how to
> model their 'multi function device' ie to bind RDMA and other drivers to a
> shared PCI function, and I think that discussion settled on adding a new bus?
> 
> Really these things are all very similar, it would be nice to have a clear
> methodology on how to use the device core if a single PCI device is split by
> software into multiple different functional units and attached to different
> driver instances.
> 
> Currently there is a lot of hacking in this area.. And a consistent scheme
> might resolve the ugliness with the dma_ops wrappers.
> 
> We already have the 'mfd' stuff to support splitting platform devices, maybe
> we need to create a 'pci-mfd' to support splitting PCI devices?
> 
> I'm not really clear how mfd and mdev relate, I always thought mdev was
> strongly linked to vfio.
> 
Mdev was strongly linked to vfio at the beginning, but as I mentioned above, it now addresses more use cases.

I followed that discussion, but was not sure about extending mdev further.

One way for the Intel drivers to do this is on top of series [9]:
the PCI driver tags the mdev it creates with MDEV_CLASS_ID_I40_FOO, and the
RDMA driver calls mdev_register_driver(), matches on that class id, and does the probe().
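To make the flow above concrete, here is a rough sketch of what the two sides could look like on top of the class-id matching from series [9]. This is only an illustration: MDEV_CLASS_ID_I40_FOO is the placeholder name used in this mail, the i40_foo_* identifiers are made up, and the exact mdev API signatures may differ depending on the kernel version the series lands in.

```c
/* Hedged sketch, not a real driver. Assumes the class-id matching
 * infrastructure from series [9]; names with i40_foo are placeholders. */
#include <linux/mdev.h>
#include <linux/module.h>

/* PCI driver side: when creating the sub function's mdev, tag it with
 * a class id so that only matching mdev drivers will bind to it. */
static int i40_foo_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
{
	mdev_set_class(mdev, MDEV_CLASS_ID_I40_FOO);
	return 0;
}

/* RDMA driver side: declare which class ids this driver handles. */
static const struct mdev_class_id i40_foo_id_table[] = {
	{ MDEV_CLASS_ID_I40_FOO },
	{},
};

/* Called by the mdev bus when a device with a matching class id appears. */
static int i40_foo_rdma_probe(struct device *dev)
{
	/* bind RDMA functionality to the sub function device here */
	return 0;
}

static struct mdev_driver i40_foo_rdma_driver = {
	.name     = "i40_foo_rdma",
	.probe    = i40_foo_rdma_probe,
	.id_table = i40_foo_id_table,
};

static int __init i40_foo_rdma_init(void)
{
	return mdev_register_driver(&i40_foo_rdma_driver, THIS_MODULE);
}
```

The point of the class id is that the mdev bus, rather than ad-hoc glue code, decides which driver binds to which sub function, which is the same match-and-probe model every other Linux bus uses.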

> At the very least if it is agreed mdev should be the vehicle here, then it
> should also be able to solve the netdev/rdma hookup problem too.
> 
> Jason

[9] https://patchwork.ozlabs.org/patch/1190425
