Message-ID: <20210319161722.GY2356281@nvidia.com>
Date: Fri, 19 Mar 2021 13:17:22 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: Max Gurtovoy <mgurtovoy@...dia.com>,
Alexey Kardashevskiy <aik@...abs.ru>, cohuck@...hat.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
liranl@...dia.com, oren@...dia.com, tzahio@...dia.com,
leonro@...dia.com, yarong@...dia.com, aviadye@...dia.com,
shahafs@...dia.com, artemp@...dia.com, kwankhede@...dia.com,
ACurrid@...dia.com, cjia@...dia.com, yishaih@...dia.com,
mjrosato@...ux.ibm.com, hch@....de
Subject: Re: [PATCH 8/9] vfio/pci: export nvlink2 support into vendor vfio_pci drivers
On Fri, Mar 19, 2021 at 09:23:41AM -0600, Alex Williamson wrote:
> On Wed, 10 Mar 2021 14:57:57 +0200
> Max Gurtovoy <mgurtovoy@...dia.com> wrote:
> > On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
> > > On 09/03/2021 19:33, Max Gurtovoy wrote:
> > >> +static const struct pci_device_id nvlink2gpu_vfio_pci_table[] = {
> > >> +	{ PCI_VDEVICE(NVIDIA, 0x1DB1) }, /* GV100GL-A NVIDIA Tesla V100-SXM2-16GB */
> > >> +	{ PCI_VDEVICE(NVIDIA, 0x1DB5) }, /* GV100GL-A NVIDIA Tesla V100-SXM2-32GB */
> > >> +	{ PCI_VDEVICE(NVIDIA, 0x1DB8) }, /* GV100GL-A NVIDIA Tesla V100-SXM3-32GB */
> > >> +	{ PCI_VDEVICE(NVIDIA, 0x1DF5) }, /* GV100GL-B NVIDIA Tesla V100-SXM2-16GB */
> > >
> > >
> > > Where is this list from?
> > >
> > > Also, how is this supposed to work at the boot time? Will the kernel
> > > try binding let's say this one and nouveau? Which one is going to win?
> >
> > At boot time nouveau driver will win since the vfio drivers don't
> > declare MODULE_DEVICE_TABLE
>
> This still seems troublesome, AIUI the MODULE_DEVICE_TABLE is
> responsible for creating aliases so that kmod can figure out which
> modules to load, but what happens if all these vfio-pci modules are
> built into the kernel or the modules are already loaded?
I think we talked about this.. We still need a better way to control
binding of VFIO modules - now that we have device-specific modules we
must have these match tables to control what devices they connect
to.
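To make that concrete, the variant drivers end up shaped roughly like
this (a sketch, not the exact patch code - the probe stub is mine):

#include <linux/module.h>
#include <linux/pci.h>

static int nvlink2gpu_vfio_pci_probe(struct pci_dev *pdev,
				     const struct pci_device_id *id)
{
	/* the real driver would register a vfio device here */
	return -ENODEV;
}

static const struct pci_device_id nvlink2gpu_vfio_pci_table[] = {
	{ PCI_VDEVICE(NVIDIA, 0x1DB1) }, /* Tesla V100-SXM2-16GB */
	{ }
};
/* Deliberately no MODULE_DEVICE_TABLE(pci, ...) here: without it no
 * modalias strings are emitted, so kmod never autoloads this module
 * for the IDs above and nouveau wins the boot-time binding. */

static struct pci_driver nvlink2gpu_vfio_pci_driver = {
	.name		= "nvlink2gpu-vfio-pci",
	.id_table	= nvlink2gpu_vfio_pci_table,
	.probe		= nvlink2gpu_vfio_pci_probe,
};
module_pci_driver(nvlink2gpu_vfio_pci_driver);
MODULE_LICENSE("GPL v2");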
Previously things used the binding of vfio_pci as the "switch" and
hardcoded all the matches inside it.
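(Today that looks roughly like this in vfio_pci_enable(), paraphrasing
from memory:

	if (pdev->vendor == PCI_VENDOR_ID_NVIDIA &&
	    IS_ENABLED(CONFIG_VFIO_PCI_NVLINK2)) {
		/* the "switch" is binding vfio-pci at all; the device
		 * match itself is hardcoded in the core driver */
		ret = vfio_pci_nvdia_v100_nvlink2_init(vdev);
		if (ret && ret != -ENODEV)
			pci_warn(pdev, "Failed to setup NVIDIA NV2 RAM region\n");
	}

so every new device type means patching vfio_pci itself.)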
I'm still keen to try the "driver flavour" idea I outlined earlier,
but it is hard to say what will resonate with Greg.
> In the former case, I think it boils down to link order while the
> latter is generally considered even less deterministic since it depends
> on module load order. So if one of these vfio modules should get
> loaded before the native driver, I think devices could bind here first.
At this point the answer is "don't link these statically" - we could
have a Kconfig dependency to prevent it.
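Something along these lines would do it (a sketch - the symbol details
are adapted from the existing NVLINK2 option, the "depends on m" part
is the new bit):

config VFIO_PCI_NVLINK2GPU
	tristate "VFIO support for NVIDIA NVLink2 GPUs"
	depends on VFIO_PCI && m
	help
	  Module-only, so a built-in copy can never race the native
	  GPU driver for the boot-time binding.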
> Are there tricks/extensions we could use in driver overrides, for
> example maybe a compatibility alias such that one of these vfio-pci
> variants could match "vfio-pci"?
driver_override is not really useful as soon as you have a match table,
as its operation is to defeat the match table entirely. :(
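(That's how the PCI core implements it - pci_match_device() is roughly:

	if (dev->driver_override) {
		/* override set: the id table is ignored entirely and
		 * the name string alone decides the match */
		if (strcmp(dev->driver_override, drv->name))
			return NULL;
		return &pci_device_id_any;	/* dummy id, always "matches" */
	}
	return pci_match_id(drv->id_table, dev);

so an override either force-binds or blocks, it can't augment the
table.)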
Again, this is still more of an outline of how things will look, as we
must get through this before we can attempt to do something in the
driver core with Greg.
We could revise this series to not register drivers at all and keep
the uAPI view exactly as is today. This would allow enough code to
show Greg how some driver flavour thing would work.
If something can't be done in the driver core, I'd propose to keep the
same basic outline Max has here, but make registering the "compat"
driver dynamic - it is basically a sub-driver design at that point and
we give up on module autoloading.
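Something like this, with a completely made-up API just to show the
shape:

#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id nvlink2gpu_ids[] = {
	{ PCI_VDEVICE(NVIDIA, 0x1DB1) }, /* Tesla V100-SXM2-16GB */
	{ }
};

static int __init nvlink2gpu_init(void)
{
	/* vfio_pci_register_subdriver() is hypothetical: rather than a
	 * separate pci_driver, the variant hooks its IDs into
	 * vfio-pci's own probe path at runtime.  Userspace still binds
	 * "vfio-pci" via driver_override exactly as today, and with no
	 * MODULE_DEVICE_TABLE there is no autoloading to fight over. */
	return vfio_pci_register_subdriver(THIS_MODULE, nvlink2gpu_ids);
}

static void __exit nvlink2gpu_exit(void)
{
	vfio_pci_unregister_subdriver(nvlink2gpu_ids);
}

module_init(nvlink2gpu_init);
module_exit(nvlink2gpu_exit);
MODULE_LICENSE("GPL v2");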
Jason