Message-ID: <20230614132047.519abe95.alex.williamson@redhat.com>
Date: Wed, 14 Jun 2023 13:20:47 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: ankita@...dia.com, aniketa@...dia.com, cjia@...dia.com,
kwankhede@...dia.com, targupta@...dia.com, vsethi@...dia.com,
acurrid@...dia.com, apopple@...dia.com, jhubbard@...dia.com,
danw@...dia.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/1] vfio/nvgpu: Add vfio pci variant module for
grace hopper

On Wed, 14 Jun 2023 14:55:28 -0300
Jason Gunthorpe <jgg@...dia.com> wrote:
> On Tue, Jun 13, 2023 at 01:24:02PM -0600, Alex Williamson wrote:
>
> > I'd even forgotten about the sparse mmap solution here, that's even
> > better than trying to do something clever with the mmap.
>
> Okay, Ankit please try this, it sounds good
>
> > > You may be right, I think this patch is trying to make things
> > > automatic for the user, but a dedicated machine type might make more
> > > sense.
> >
> > Juan and I discussed this with Ankit last week; there are a lot of
> > downsides to another machine type, but the automatic manipulation of
> > the machine is still problematic. Another option we have is to use
> > QEMU command line options for each feature. For example, we already
> > support NUMA VM configurations and loading command line ACPI tables,
> > hopefully also associating devices to nodes. Do we end up with just a
> > configuration spec for the VM to satisfy the in-guest drivers?
> > Guest driver requirements may potentially change over time, so a
> > hard-coded recipe built into QEMU might not be the best solution
> > anyway.
>
> Let's have those discussions settle then, I know there are a few
> different ideas here people are looking at.
>
> > I think NVIDIA might have an interest in enabling Atomic Ops support in
> > VMs as well, so please comment in the series thread if there are
> > concerns here or if anyone can definitively say that another guest OS
> > we might care about does cache root port capability bits. Thanks,
>
> I expect we do - I haven't heard of atomic ops specifically yet
> though.
>
> We just did a big exercise on relaxed ordering which is similarly
> troubled.
>
> Here we decided to just not use the VM's config space at all. The
> device itself knows if it can do relaxed ordering and it just reports
> this directly to the driver.
>
> In many ways I would prefer to do the same for atomics. I haven't
> checked fully, but I think we do this anyhow: as you can see, mlx5
> simply tries to enable PCI atomics but doesn't appear to do anything
> with the result. I expect the actual success/fail is looped back
> through the device interface itself.
>
> So, for mlx5, it probably already works in most real cases. Passing a
> PF might not work, I guess.
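
For what it's worth, that pattern looks roughly like the sketch below;
it assumes pci_enable_atomic_ops_to_root(), the existing kernel entry
point for requesting AtomicOps, and treats failure as non-fatal. I
haven't checked mlx5's actual code, so this is only an illustration:

#include <linux/pci.h>

static void try_enable_pci_atomics(struct pci_dev *pdev)
{
	int ret;

	/*
	 * Walks up to the root port, checking AtomicOps routing on any
	 * intermediate switch ports and the requested completer caps on
	 * the root port, then sets AtomicOp Requester Enable on the
	 * endpoint.
	 */
	ret = pci_enable_atomic_ops_to_root(pdev,
					    PCI_EXP_DEVCAP2_ATOMIC_COMP64);
	if (ret)
		/* Non-fatal: let the device interface report what works. */
		pci_info(pdev, "PCIe AtomicOps not enabled: %d\n", ret);
}
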
>
> It is not a satisfying answer from a VMM design perspective.
>
> Some QEMU command line option to say which root ports to create with
> which atomic caps seems like a reasonable thing to do.

The referenced QEMU proposal puts a number of restrictions on
automatically flipping bits on the root port, e.g. as exposed in the
VM, the endpoint must be directly connected to a root port (avoiding
complications around AtomicOps routing support) and must be a
single-function device at devfn 0x0 (avoiding heterogeneous paths on
the host). It also tests that the root port bits aren't otherwise
set, in order to remain compatible with some future root port device
switch for enabling fixed atomic completer support.
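
On the kernel side the equivalent topology tests are straightforward;
here's a rough sketch using standard Linux PCI helpers (the actual
checks in the proposal live in QEMU against the VM-visible topology,
so this is only an illustration, and the function name is mine):

static bool atomic_comp_fixup_eligible(struct pci_dev *pdev)
{
	struct pci_dev *bridge = pci_upstream_bridge(pdev);
	u32 cap;

	/* The endpoint must sit directly below a root port. */
	if (!bridge || pci_pcie_type(bridge) != PCI_EXP_TYPE_ROOT_PORT)
		return false;

	/* Single-function device at devfn 0x0 only. */
	if (pdev->devfn != 0 || pdev->multifunction)
		return false;

	/* Leave the root port alone if completer bits are already set. */
	pcie_capability_read_dword(bridge, PCI_EXP_DEVCAP2, &cap);
	if (cap & (PCI_EXP_DEVCAP2_ATOMIC_COMP32 |
		   PCI_EXP_DEVCAP2_ATOMIC_COMP64 |
		   PCI_EXP_DEVCAP2_ATOMIC_COMP128))
		return false;

	return true;
}
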
This tries to balance two goals: we want to support device switches
for this sort of fine-grained VM configuration, but there are also
isolated cases that can be enabled automatically and potentially
cover the vast majority of use cases.

OTOH, trying to do something automatic for 'AtomicOps Routing
Supported' looks far more challenging, and we would probably rely on
command line device switches for that.
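
The difficulty is that routing is a path property rather than a single
device property: every switch port between the endpoint and the root
port must report 'AtomicOps Routing Supported'. Roughly the walk that
pci_enable_atomic_ops_to_root() already does today, simplified:

static bool atomic_route_to_root(struct pci_dev *pdev)
{
	struct pci_dev *bridge = pci_upstream_bridge(pdev);

	while (bridge) {
		u32 cap;

		/* The walk (and routing requirement) ends at the root port. */
		if (pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)
			return true;

		/* Intermediate switch ports must route AtomicOps. */
		pcie_capability_read_dword(bridge, PCI_EXP_DEVCAP2, &cap);
		if (!(cap & PCI_EXP_DEVCAP2_ATOMIC_ROUTE))
			return false;

		bridge = pci_upstream_bridge(bridge);
	}

	return false;
}
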
Regarding relaxed ordering, are we talking about the 'No RO-enabled
PR-PR Passing' bit in the devcap2 register? Unfortunately that bit
is labeled HwInit, so we don't have the same leniency toward
modifying it at runtime as we do for the immediately preceding
AtomicOps completer support bits. In my interpretation, that bit
also only reports whether a specific reordering is implemented, so
it matters more for expected performance than for functionality(?)
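
For reference, reading that bit looks like the sketch below. As far
as I can tell pci_regs.h doesn't define a macro for it, so the value
is taken straight from the spec (devcap2 bit 10):

/* 'No RO-enabled PR-PR Passing', Device Capabilities 2, bit 10 */
#define NO_RO_ENABLED_PR_PR_PASSING	0x00000400

static void report_ro_prpr(struct pci_dev *pdev)
{
	u32 cap;

	pcie_capability_read_dword(pdev, PCI_EXP_DEVCAP2, &cap);

	/*
	 * HwInit: fixed by hardware/firmware, so unlike the AtomicOps
	 * completer bits on a root port it's not ours to flip, and it
	 * only says whether one specific reordering is ever performed.
	 */
	pci_info(pdev, "No RO-enabled PR-PR Passing: %d\n",
		 !!(cap & NO_RO_ENABLED_PR_PR_PASSING));
}
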
In general, I think we put driver writers in an awkward place if they
start trying things that the hardware capability bits clearly report
as unsupported. Error handling can be pretty fragile, especially when
value-add firmware thinks it knows best. Thanks,
Alex