Message-ID: <20230918111949.1d6c8482.alex.williamson@redhat.com>
Date: Mon, 18 Sep 2023 11:19:49 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: ankita@...dia.com, yishaih@...dia.com,
shameerali.kolothum.thodi@...wei.com, kevin.tian@...el.com,
aniketa@...dia.com, cjia@...dia.com, kwankhede@...dia.com,
targupta@...dia.com, vsethi@...dia.com, acurrid@...dia.com,
apopple@...dia.com, jhubbard@...dia.com, danw@...dia.com,
anuaggarwal@...dia.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v10 1/1] vfio/nvgpu: Add vfio pci variant module for
grace hopper
On Mon, 18 Sep 2023 11:49:23 -0300
Jason Gunthorpe <jgg@...dia.com> wrote:
> On Mon, Sep 18, 2023 at 08:27:48AM -0600, Alex Williamson wrote:
>
> > > > This looks like a giant red flag that this approach of masquerading the
> > > > coherent memory as a PCI BAR is the wrong way to go. If the VMM needs
> > > > to know about this coherent memory, it needs to get that information
> > > > in-band.
> > >
> > > The VMM part doesn't need this flag, nor does the VM. The
> > > orchestration needs to know when to setup the pxm stuff.
> >
> > Subject: [PATCH v1 1/4] vfio: new command line params for device memory NUMA nodes
> > --- a/hw/vfio/pci.c
> > +++ b/hw/vfio/pci.c
> > ...
> > +static bool vfio_pci_read_cohmem_support_sysfs(VFIODevice *vdev)
> > +{
> > + gchar *contents = NULL;
> > + gsize length;
> > + char *path;
> > + bool ret = false;
> > + uint32_t supported;
> > +
> > + path = g_strdup_printf("%s/coherent_mem", vdev->sysfsdev);
> > + if (g_file_get_contents(path, &contents, &length, NULL) && length > 0) {
> > + if ((sscanf(contents, "%u", &supported) == 1) && supported) {
> > + ret = true;
> > + }
> > + }
>
> Yes, but it drives the ACPI pxm auto configuration stuff, not really
> vfio stuff.
>
> > > I think we should drop the sysfs for now until the qemu thread about
> > > the pxm stuff settles into an idea.
> > >
> > > When the qemu API is clear we can have a discussion on what component
> > > should detect this driver and setup the pxm things, then answer the
> > > how should the detection work from the kernel side.
> > >
> > > > be reaching out to arbitrary sysfs attributes. Minimally this
> > > > information should be provided via a capability on the region info
> > > > chain,
> > >
> > > That definitely isn't suitable, eg libvirt won't have access to inband
> > > information if it turns out libvirt is supposed to setup the pxm qemu
> > > arguments?
> >
> > Why would libvirt look for a "coherent_mem" attribute in sysfs when it
> > can just look at the driver used by the device.
>
> Sure, if that is consensus. Also I think coherent_mem is a terrible
> sysfs name for this, it should be more like 'num_pxm_nodes' or
> something.
>
> > Part of the QEMU series is also trying to invoke the VM
> > configuration based only on this
> > device being attached to avoid libvirt orchestration changes:
>
> Right, that is where it gets confusing - it mixes the vfio world in
> qemu with the pxm world. That should be cleaned up somehow.
>
> > > > A "coherent_mem" attribute on the device provides a very weak
> > > > association to the memory region it's trying to describe.
> > >
> > > That's because its use has nothing to do with the memory region :)
> >
> > So we're creating a very generic sysfs attribute, which is meant to be
> > used by orchestration to invoke device specific configuration, but is
> > currently only proposed for use by the VMM. The orchestration problem
> > doesn't really exist, libvirt could know simply by the driver name that
> > the device requires this configuration.
>
> Yep
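
To put that concretely (a sketch only, nothing from either series, and
the driver name matched below is just a placeholder), orchestration can
already key off the bound driver by resolving the device's sysfs
"driver" symlink, without any new attribute:

/*
 * Hypothetical helper: identify the variant driver bound to a device by
 * looking at the basename of <sysfsdev>/driver.  The driver name checked
 * here is illustrative only.
 */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static bool device_uses_nvgrace_gpu_driver(const char *sysfsdev)
{
    char link[PATH_MAX], target[PATH_MAX];
    ssize_t len;

    snprintf(link, sizeof(link), "%s/driver", sysfsdev);
    len = readlink(link, target, sizeof(target) - 1);
    if (len < 0) {
        return false;
    }
    target[len] = '\0';

    /* The basename of the symlink target is the bound driver's name. */
    const char *name = strrchr(target, '/');
    name = name ? name + 1 : target;

    return strcmp(name, "nvgrace-gpu-vfio-pci") == 0;
}
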
>
> > And the VMM usage is self-inflicted because we insist on
> > masquerading the coherent memory as a nondescript PCI BAR rather
> > than providing a device specific region to enlighten the VMM to this
> > unique feature.
>
> I see it as two completely separate things.
>
> 1) VFIO and qemu creating a vPCI device. Here we don't need this
> information.
>
> 2) This ACPI pxm stuff to emulate the bare metal FW.
> Including a proposal for auto-detecting what kind of bare metal FW
> is being used.
>
> This being a poor idea for #2 doesn't imply problems with #1, it
> just means more work is needed on the ACPI PXM stuff.

But I don't think we've justified why it's a good idea for #1.  Does
the composed vPCI device with coherent memory masqueraded as BAR2 have
a standalone use case without #2?

My understanding based on these series is that the guest driver somehow
carves up the coherent memory among a set of memory-less NUMA nodes
(how to know how many?) created by the VMM and reported via the _DSD for
the device. If this sort of configuration is a requirement for making
use of the coherent memory, then what exactly becomes easier by the fact
that it's exposed as a PCI BAR?
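
As an aside, my rough mental model of that VMM side looks something like
the sketch below; none of this is from the posted series, and the _DSD
property names, node layout, and AML helpers used are assumptions for
illustration only:

/*
 * Hypothetical sketch: how a VMM might advertise the memory-less NUMA
 * nodes reserved for the device's coherent memory through _DSD device
 * properties.  Property names and the node layout are assumptions.
 */
#include "qemu/osdep.h"
#include "hw/acpi/aml-build.h"

static void build_gpu_mem_dsd(Aml *dev, uint32_t pxm_start, uint32_t pxm_count)
{
    Aml *dsd = aml_package(2);
    Aml *props = aml_package(2);
    Aml *prop_start = aml_package(2);
    Aml *prop_count = aml_package(2);

    aml_append(prop_start, aml_string("nvidia,gpu-mem-pxm-start"));
    aml_append(prop_start, aml_int(pxm_start));
    aml_append(prop_count, aml_string("nvidia,gpu-mem-pxm-count"));
    aml_append(prop_count, aml_int(pxm_count));

    aml_append(props, prop_start);
    aml_append(props, prop_count);

    /* Device Properties UUID defined for _DSD */
    aml_append(dsd, aml_touuid("DAFFD814-6EBA-4D8C-8A91-BC9BBF4AA301"));
    aml_append(dsd, props);

    aml_append(dev, aml_name_decl("_DSD", dsd));
}
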
In fact, if it weren't a BAR I'd probably suggest that the whole
configuration of this device should be centered around a new
nvidia-gpu-mem object. That object could reference the ID of a
vfio-pci device providing the coherent memory via a device specific
region and be provided with a range of memory-less nodes created for
its use. The object would insert the coherent memory range into the VM
address space and provide the device properties to make use of it in
the same way as done on bare metal.
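
Purely to illustrate the shape of that idea (a sketch under my own
assumptions, not code from any posted series), such an object might look
roughly like this in QEMU:

/*
 * Hypothetical "nvidia-gpu-mem" QOM object, sketched only to illustrate
 * the idea.  The type name, fields, and the notion of a device-specific
 * coherent memory region are assumptions here.
 */
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "qapi/error.h"
#include "qom/object.h"
#include "qom/object_interfaces.h"

#define TYPE_NVIDIA_GPU_MEM "nvidia-gpu-mem"
OBJECT_DECLARE_SIMPLE_TYPE(NvidiaGpuMem, NVIDIA_GPU_MEM)

struct NvidiaGpuMem {
    Object parent_obj;

    char *vfio_dev_id;   /* id of the vfio-pci device exporting the region */
    uint32_t node_start; /* first memory-less NUMA node reserved for it */
    uint32_t node_count; /* number of such nodes */
};

static void nvidia_gpu_mem_complete(UserCreatable *uc, Error **errp)
{
    /*
     * This is where the object would look up the vfio-pci device, map its
     * device-specific coherent memory region into the guest address space,
     * and emit the device properties tying it to the reserved NUMA nodes.
     */
}

static void nvidia_gpu_mem_class_init(ObjectClass *oc, void *data)
{
    /* Property registration ("device", "node-start", "node-count") omitted. */
    USER_CREATABLE_CLASS(oc)->complete = nvidia_gpu_mem_complete;
}

static const TypeInfo nvidia_gpu_mem_info = {
    .name          = TYPE_NVIDIA_GPU_MEM,
    .parent        = TYPE_OBJECT,
    .instance_size = sizeof(NvidiaGpuMem),
    .class_init    = nvidia_gpu_mem_class_init,
    .interfaces    = (InterfaceInfo[]) {
        { TYPE_USER_CREATABLE },
        { }
    },
};

static void nvidia_gpu_mem_register_types(void)
{
    type_register_static(&nvidia_gpu_mem_info);
}

type_init(nvidia_gpu_mem_register_types)

On the command line that would be something like
"-object nvidia-gpu-mem,id=gmem0,device=vfio0,node-start=2,node-count=8"
(again, entirely hypothetical syntax), keeping the vfio-pci device itself
generic.
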
It seems to me that the PCI BAR representation of coherent memory is
largely just a shortcut to getting it into the VM address space, but
it's also leading us down these paths where the "pxm stuff" is invoked
based on the device attached to the VM, which is getting a lot of
resistance. Thanks,
Alex