Message-ID: <20210323134247.GC2356281@nvidia.com>
Date: Tue, 23 Mar 2021 10:42:47 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Christoph Hellwig <hch@....de>
Cc: Alex Williamson <alex.williamson@...hat.com>,
Max Gurtovoy <mgurtovoy@...dia.com>,
Alexey Kardashevskiy <aik@...abs.ru>, cohuck@...hat.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
liranl@...dia.com, oren@...dia.com, tzahio@...dia.com,
leonro@...dia.com, yarong@...dia.com, aviadye@...dia.com,
shahafs@...dia.com, artemp@...dia.com, kwankhede@...dia.com,
ACurrid@...dia.com, cjia@...dia.com, yishaih@...dia.com,
mjrosato@...ux.ibm.com
Subject: Re: [PATCH 8/9] vfio/pci: export nvlink2 support into vendor vfio_pci drivers
On Tue, Mar 23, 2021 at 02:17:09PM +0100, Christoph Hellwig wrote:
> On Mon, Mar 22, 2021 at 01:44:11PM -0300, Jason Gunthorpe wrote:
> > This isn't quite the scenario that needs solving. Lets go back to
> > Max's V1 posting:
> >
> > The mlx5_vfio_pci.c pci_driver matches this:
> >
> > + { PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1042,
> > + PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID) }, /* Virtio SNAP controllers */
> >
> > This overlaps with the match table in
> > drivers/virtio/virtio_pci_common.c:
> >
> > { PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) },
> >
> > So, if we do as you propose, we have to add something Mellanox-specific
> > to virtio_pci_common, which seems to me to just repeat this whole
> > problem in more drivers.
>
> Oh, yikes.
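To spell out the collision, here is a sketch of how the two entries
expand, based on the PCI_DEVICE() and PCI_DEVICE_SUB() macros in
include/linux/pci.h (the table names are made up, this is not the
actual driver source):

  #include <linux/pci.h>

  /* virtio_pci_common.c: PCI_DEVICE() leaves the subsystem IDs as
   * PCI_ANY_ID, so this matches every Red Hat/Qumranet device: */
  static const struct pci_device_id virtio_ids[] = {
          { PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID,
            PCI_ANY_ID, PCI_ANY_ID },
          { 0 }
  };

  /* mlx5_vfio_pci.c: PCI_DEVICE_SUB() pins the subsystem vendor: */
  static const struct pci_device_id mlx5_vfio_ids[] = {
          { PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1042,
            PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID },
          { 0 }
  };

Every device the second table matches is also matched by the first, so
whichever driver the PCI core consults first claims the device.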
This is why I keep saying it is a VFIO driver - it has no relation to
the normal kernel drivers on the hypervisor. Even loading a normal
kernel driver first and then switching it to VFIO mode would be
unacceptably slow and disruptive.
The goal is to bind a VFIO-mode driver directly, with PCI driver
autoprobing disabled so that no regular driver ever attaches. Big
servers will have thousands of these things.
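Roughly, the flow looks like this - a sketch only. The sysfs paths are
the standard driver-model interfaces, but the 0000:3b:00.2 BDF and the
mlx5_vfio_pci driver name are placeholders for whatever Max's series
ends up with:

  #include <stdio.h>
  #include <stdlib.h>

  static void write_sysfs(const char *path, const char *val)
  {
          FILE *f = fopen(path, "w");

          if (!f) {
                  perror(path);
                  exit(1);
          }
          fprintf(f, "%s", val);
          fclose(f);
  }

  int main(void)
  {
          /* stop the PCI core from auto-attaching regular drivers */
          write_sysfs("/sys/bus/pci/drivers_autoprobe", "0");

          /* steer one function (placeholder BDF) to the VFIO driver */
          write_sysfs("/sys/bus/pci/devices/0000:3b:00.2/driver_override",
                      "mlx5_vfio_pci");
          write_sysfs("/sys/bus/pci/drivers_probe", "0000:3b:00.2");
          return 0;
  }

With that, no regular driver ever touches the device and there is no
unbind/rebind churn at scale.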
> > The general thing that is happening is that people are adding VM
> > migration capability to existing standard PCI interfaces like VFIO,
> > NVMe, etc.
>
> Well, if a migration capability is added to virtio (or NVMe) it should
> be standardized and not vendor specific.
It would be nice, but it would be a challenging standard to write. I
think the industry is still in a pre-standards phase, trying to figure
out how this stuff should work at all.
IMHO the PCI-SIG needs to tackle a big part of this, since we can't
embed any migration controls in the VF itself; they have to be
accessible only to the hypervisor to be secure.
What we've got now is a Linux standard in VFIO, where the uAPI to
manage migration is multi-vendor, and we want to plug drivers into
that.
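For reference, that uAPI is the migration region defined in
include/uapi/linux/vfio.h; abridged, the state machine userspace
drives looks like:

  #include <linux/types.h>

  /* abridged from include/uapi/linux/vfio.h */
  struct vfio_device_migration_info {
          __u32 device_state;
  #define VFIO_DEVICE_STATE_STOP          (0)
  #define VFIO_DEVICE_STATE_RUNNING       (1 << 0)
  #define VFIO_DEVICE_STATE_SAVING        (1 << 1)
  #define VFIO_DEVICE_STATE_RESUMING      (1 << 2)
          __u32 reserved;
          __u64 pending_bytes;
          __u64 data_offset;
          __u64 data_size;
  };

Userspace (QEMU) drives SAVING/RESUMING through this one interface no
matter which vendor driver implements the region behind it - that is
exactly the plug-in point for the drivers above.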
If in a few years the industry also develops HW standards, then I
imagine using the same mechanism to plug in those standards-based
implementations.
Jason