Message-ID: <20240103110016.5067b42e.alex.williamson@redhat.com>
Date: Wed, 3 Jan 2024 11:00:16 -0700
From: Alex Williamson <alex.williamson@...hat.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Ankit Agrawal <ankita@...dia.com>, Yishai Hadas <yishaih@...dia.com>,
 "shameerali.kolothum.thodi@...wei.com"
 <shameerali.kolothum.thodi@...wei.com>, "kevin.tian@...el.com"
 <kevin.tian@...el.com>, "eric.auger@...hat.com" <eric.auger@...hat.com>,
 "brett.creeley@....com" <brett.creeley@....com>, "horms@...nel.org"
 <horms@...nel.org>, Aniket Agashe <aniketa@...dia.com>, Neo Jia
 <cjia@...dia.com>, Kirti Wankhede <kwankhede@...dia.com>, "Tarun Gupta
 (SW-GPU)" <targupta@...dia.com>, Vikram Sethi <vsethi@...dia.com>, Andy
 Currid <acurrid@...dia.com>, Alistair Popple <apopple@...dia.com>, John
 Hubbard <jhubbard@...dia.com>, Dan Williams <danw@...dia.com>, "Anuj
 Aggarwal (SW-GPU)" <anuaggarwal@...dia.com>, Matt Ochs <mochs@...dia.com>,
 "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "linux-kernel@...r.kernel.org"
 <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v15 1/1] vfio/nvgrace-gpu: Add vfio pci variant module
 for grace hopper

On Wed, 3 Jan 2024 12:57:27 -0400
Jason Gunthorpe <jgg@...dia.com> wrote:

> On Tue, Jan 02, 2024 at 09:10:01AM -0700, Alex Williamson wrote:
> 
> > Yes, it's possible to add support that these ranges honor the memory
> > enable bit, but it's not trivial and unfortunately even vfio-pci isn't
> > a great example of this.  
> 
> We talked about this already, the HW architects here confirm there is
> no issue with reset and memory enable. You will get all 1's on read
> and NOP on write. It doesn't need to implement VMA zap.

We talked about reset; I don't recall that we discussed whether the
coherent and uncached memory ranges masquerading as PCI BARs here honor
the memory enable bit in the command register.
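
For concreteness, a minimal sketch of what I mean by honoring the bit
is below; the helper name is invented, and whether the bit should be
read from the physical command register or from the virtualized copy
that vfio-pci maintains is glossed over here:

#include <linux/pci.h>

/* Illustrative only: is memory space decode currently enabled? */
static bool fake_bar_mem_enabled(struct pci_dev *pdev)
{
        u16 cmd;

        pci_read_config_word(pdev, PCI_COMMAND, &cmd);
        return cmd & PCI_COMMAND_MEMORY;
}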

> > around device reset or relative to the PCI command register.  The
> > variant driver becomes a trivial implementation that masks BARs 2 & 4
> > and exposes the ACPI range as a device specific region with only mmap
> > support.  QEMU can then map the device specific region into VM memory
> > and create an equivalent ACPI table for the guest.  
> 
> Well, no, probably not. There is an NVIDIA specification for how the
> vPCI function should be setup within the VM and it uses the BAR
> method, not the ACPI.

Is this specification available?  It's a shame we've gotten this far
without a reference to it.

> There are a lot of VMMs and OSs this needs to support so it must all
> be consistent. For better or worse the decision was taken for the vPCI
> spec to use BAR not ACPI, in part due to feedback from the broader VMM
> ecosystem, and informed by future product plans.
> 
> So, if vfio does special regions then qemu and everyone else has to
> fix it to meet the spec.

Great, this is the sort of justification and transparency that had not
been previously provided.  It is curious that the device ABI was
changed as recently as a couple of months ago to add the uncached BAR,
so this hasn't felt like a firm design.  Also I believe it's been
stated
that the driver supports both the bare metal representation of the
device and this model where the coherent memory is mapped as a BAR, so
I'm not sure what obstacles remain or how we're positioned for future
products if we take the bare metal approach.

> > I know Jason had described this device as effectively pre-CXL to
> > justify the coherent memory mapping, but it seems like there's still a
> > gap here that we can't simply hand wave that this PCI BAR follows a
> > different set of semantics.    
> 
> I thought all the meaningful differences are fixed now?
> 
> The main remaining issue seems to be around the config space
> emulation?

In the development of the virtio-vfio-pci variant driver we noted that
r/w access to the IO BAR didn't honor the IO bit in the command
register, which was quickly remedied and now returns -EIO if accessed
while disabled.  We were already adding r/w support to the coherent BAR
at the time, since vfio has no means to express a region as having only
mmap support, and precedent exists that BAR regions must support these
accesses.  So it was suggested that r/w accesses should also
honor the command register memory enable bit, but of course memory BARs
also support mmap, which snowballs into a much more complicated problem
than we have in the case of the virtio IO BAR.
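
To make the scope concrete, the r/w side alone would be simple enough,
along the lines of the sketch below (reusing the illustrative helper
from earlier; the names are made up, and the hard part, zapping and
restoring established mmaps as the bit is toggled, is exactly what
this does not cover):

#include <linux/vfio_pci_core.h>

/* Illustrative only: fail r/w access while memory decode is disabled */
static ssize_t fake_bar_rw(struct vfio_pci_core_device *vdev,
                           char __user *buf, size_t count,
                           loff_t *ppos, bool iswrite)
{
        if (!fake_bar_mem_enabled(vdev->pdev))
                return -EIO;

        /* ... forward the access to the coherent/uncached range ... */
        return count;
}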

So where do we go?  Do we continue down the path of emulating full PCI
semantics relative to these emulated BARs?  Does hardware take into
account the memory enable bit of the command register?  Do we
re-evaluate the BAR model in favor of a device specific region?
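
For comparison, the device specific region route would look something
like the sketch below on the variant driver side.  The subtype and the
fake_* names are invented and nothing like this exists in the uAPI
today; the point is only that the region would advertise mmap and
nothing else, sidestepping the BAR semantics question entirely:

#include <linux/pci_ids.h>
#include <linux/vfio.h>
#include <linux/vfio_pci_core.h>

/* Hypothetical subtype, not defined in the uAPI today */
#define FAKE_NVIDIA_COH_MEM_SUBTYPE     1

static int fake_coh_mmap(struct vfio_pci_core_device *vdev,
                         struct vfio_pci_region *region,
                         struct vm_area_struct *vma)
{
        /* ... remap the coherent range into the vma ... */
        return 0;
}

static const struct vfio_pci_regops fake_coh_regops = {
        .mmap = fake_coh_mmap,          /* mmap only, no .rw handler */
};

static int fake_register_coh_region(struct vfio_pci_core_device *vdev,
                                    size_t coh_size)
{
        /* Advertise only VFIO_REGION_INFO_FLAG_MMAP, no read/write */
        return vfio_pci_core_register_dev_region(vdev,
                        VFIO_REGION_TYPE_PCI_VENDOR_TYPE |
                        PCI_VENDOR_ID_NVIDIA,
                        FAKE_NVIDIA_COH_MEM_SUBTYPE, &fake_coh_regops,
                        coh_size, VFIO_REGION_INFO_FLAG_MMAP, NULL);
}

QEMU would then mmap the region into VM memory and build the ACPI
description itself, per the earlier suggestion.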

> > We don't typically endorse complexity in the kernel only for the
> > purpose of avoiding work in userspace.  The absolute minimum should
> > be some justification of the design decision and analysis relative
> > to standard PCI behavior.  Thanks,  
> 
> If we strictly took that view in VFIO a lot of stuff wouldn't be here
> :)
> 
> I've made this argument before and gave up - the ecosystem wants to
> support multiple VMMs and the sanest path to do that is via VFIO
> kernel drivers that plug into existing vfio-pci support in the VMM
> ecosystem.
> 
> From a HW supplier perspective it is quite vexing to have to support
> all these different (and often proprietary!) VMM implementations. It
> is not just top of tree qemu.
> 
> If we instead did complex userspace drivers and userspace emulation of
> config space and so on then things like the IDXD SIOV support would
> look *very* different and not use VFIO at all. That would probably be
> somewhat better for security, but I was convinced it is a long and
> technically complex road.
> 
> At least with this approach the only VMM issue is the NUMA nodes, and
> as we have discussed that hackery is to make up for current Linux
> kernel SW limitations, not actually reflecting anything about the
> HW. If some other OS or future Linux doesn't require the ACPI NUMA
> nodes to create an OS visible NUMA object then the VMM will not
> require any changes.

Yes, I'm satisfied with where we've landed for the NUMA nodes and
generic initiator object.  It's an annoying constraint for management
tools but it's better than the original proposal where nodes
automatically popped into existence based on a vfio-pci device. 

There's certainly a balancing game of complexity in the driver vs
deferring the work to userspace.  From my perspective, I don't have a
good justification for why we're on the emulated BAR path when another
path looks a lot easier.  With the apparent increasing complexity of
emulating the memory enable semantics, I felt we needed to get a better
story there and really look at whether those semantics are worthwhile,
or whether, as alluded, HW perhaps takes this into account (though I'm
not sure how).

I'd suggest we take a look at whether we need to continue to pursue
honoring the memory enable bit for these BARs and make a conscious and
documented decision if we choose to ignore it.  Ideally we could also
make the shared spec that we're implementing available to the
community to justify the design decisions here.  In the case of
GPUDirect Cliques we had permission to post the spec to the list so it
could be archived to provide a stable link for future reference.
Thanks,

Alex

