Message-ID: <20240209085531.73f25a98.alex.williamson@redhat.com>
Date: Fri, 9 Feb 2024 08:55:31 -0700
From: Alex Williamson <alex.williamson@...hat.com>
To: Ankit Agrawal <ankita@...dia.com>
Cc: "Tian, Kevin" <kevin.tian@...el.com>, Jason Gunthorpe <jgg@...dia.com>,
 Yishai Hadas <yishaih@...dia.com>, "mst@...hat.com" <mst@...hat.com>,
 "shameerali.kolothum.thodi@...wei.com"
 <shameerali.kolothum.thodi@...wei.com>, "clg@...hat.com" <clg@...hat.com>,
 "oleksandr@...alenko.name" <oleksandr@...alenko.name>, "K V P,
 Satyanarayana" <satyanarayana.k.v.p@...el.com>, "eric.auger@...hat.com"
 <eric.auger@...hat.com>, "brett.creeley@....com" <brett.creeley@....com>,
 "horms@...nel.org" <horms@...nel.org>, Rahul Rameshbabu
 <rrameshbabu@...dia.com>, Aniket Agashe <aniketa@...dia.com>, Neo Jia
 <cjia@...dia.com>, Kirti Wankhede <kwankhede@...dia.com>, "Tarun Gupta
 (SW-GPU)" <targupta@...dia.com>, Vikram Sethi <vsethi@...dia.com>, Andy
 Currid <acurrid@...dia.com>, Alistair Popple <apopple@...dia.com>, John
 Hubbard <jhubbard@...dia.com>, Dan Williams <danw@...dia.com>, "Anuj
 Aggarwal (SW-GPU)" <anuaggarwal@...dia.com>, Matt Ochs <mochs@...dia.com>,
 "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "linux-kernel@...r.kernel.org"
 <linux-kernel@...r.kernel.org>, "virtualization@...ts.linux-foundation.org"
 <virtualization@...ts.linux-foundation.org>
Subject: Re: [PATCH v17 3/3] vfio/nvgrace-gpu: Add vfio pci variant module
 for grace hopper

On Fri, 9 Feb 2024 09:20:22 +0000
Ankit Agrawal <ankita@...dia.com> wrote:

> Thanks Kevin for the review. Comments inline.
> 
> >>
> >> Note that the usemem memory is added by the VM Nvidia device driver [5]
> >> to the VM kernel as memblocks. Hence make the usable memory size
> >> memblock aligned.
> >
> > Is the memblock size defined in the spec, or is it purely a guest implementation choice?
> 
> The MEMBLOCK value is a hardwired, constant ABI value between the GPU
> FW and the VFIO driver.
> 
> >>
> >> If the bare metal properties are not present, the driver registers the
> >> vfio-pci-core function pointers.  
> >
> > So if QEMU doesn't generate such a property, the variant driver running
> > inside the guest will always fall back to the core functions, and guest
> > vfio userspace will observe both the resmem and usemem BARs. But then
> > there is nothing in place to prohibit mapping the resmem BAR as
> > cacheable.
> >
> > Should this driver check the presence of either the ACPI property or
> > the resmem/usemem BARs to enable the variant function pointers?
> 
> Maybe I am missing something here, but if the ACPI property is absent,
> the real physical BARs present on the device will be exposed to the VM
> by the vfio-pci-core functions. So I think that if the variant driver is
> run within the VM, it should not see the fake usemem and resmem BARs.
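
As an aside on the memblock alignment mentioned in the quoted description
above, a minimal sketch of the idea looks roughly like the following. The
constant and helper names here are assumptions for illustration, not taken
from the patch; the real granularity is the FW/driver ABI value Ankit
mentions.

#include <linux/kernel.h>
#include <linux/sizes.h>
#include <linux/types.h>

/* Assumed fixed memblock granularity agreed between the GPU FW and the
 * driver; the real ABI value lives in the nvgrace-gpu driver, not here.
 */
#define NVGRACE_MEMBLK_SIZE	SZ_512M

/* Round the device-reported usemem length up to a whole number of
 * memblocks so the VM driver can add it to the kernel cleanly.
 */
static u64 nvgrace_usemem_aligned_size(u64 raw_len)
{
	return ALIGN(raw_len, NVGRACE_MEMBLK_SIZE);
}

Whether the real driver rounds up or down, and at which granularity, is
dictated by that FW/driver ABI value.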
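On the point about falling back to the vfio-pci-core function pointers when
the bare-metal ACPI property is absent, the probe-time shape being discussed
is roughly the following. This is only a sketch: the structure and the
identifiers (nvgrace_gpu_fetch_memory_property(), the two ops tables) are
assumptions about the patch, not quotes from it.

#include <linux/pci.h>
#include <linux/vfio.h>

/* Placeholder ops tables; in the real driver these are full
 * vfio_device_ops definitions.
 */
static const struct vfio_device_ops nvgrace_gpu_pci_core_ops;
static const struct vfio_device_ops nvgrace_gpu_pci_ops;

/* Assumed helper: reads the bare-metal ACPI property describing the
 * coherent GPU memory; stubbed here to report absence.
 */
static int nvgrace_gpu_fetch_memory_property(struct pci_dev *pdev,
					     u64 *memphys, u64 *memlength)
{
	return -ENODEV;
}

static int nvgrace_gpu_probe(struct pci_dev *pdev,
			     const struct pci_device_id *id)
{
	/* Default: behave exactly like plain vfio-pci. */
	const struct vfio_device_ops *ops = &nvgrace_gpu_pci_core_ops;
	u64 memphys, memlength;

	/* Only the bare-metal host exposes the ACPI property; when it is
	 * present, switch to the variant ops that fake the usemem/resmem
	 * BARs and give the coherent memory a cacheable mmap.
	 */
	if (!nvgrace_gpu_fetch_memory_property(pdev, &memphys, &memlength))
		ops = &nvgrace_gpu_pci_ops;

	/* ... allocate and register the vfio core device with 'ops' ... */
	return 0;
}

In a nested setup the L1 instance has already faked the BARs, so the L2
instance, seeing no ACPI property, falls back to the core ops, which is the
scenario Kevin is asking about.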

There are two possibilities here: either we're assigning the pure
physical device from a host that does not have the ACPI properties, or
we're performing a nested assignment.  In the former case we're simply
passing along the unmodified physical BARs.  In the latter case we're
actually passing through the fake BARs; the virtualization of the
device has already happened in the level 1 assignment.

I think Kevin's point also relates to this latter scenario: in the
L1 instance of the nvgrace-gpu driver the mmap of the usemem BAR is
cacheable, but in the L2 instance of the driver, where we only use the
vfio-pci-core ops, nothing maintains that cacheable mapping.  Is that a
problem?  An uncached mapping on top of a cacheable mapping is often
prone to problems.  Thanks,

Alex
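
For context on the cacheable-vs-uncached concern above, the contrast is
roughly the following simplified sketch. The function names are assumed for
illustration and this is not code from either driver; it only shows the two
mapping styles being compared.

#include <linux/mm.h>

/* What the L1 nvgrace-gpu usemem path conceptually does: leave the
 * default page protection alone so the coherent memory is mapped
 * cacheable.
 */
static int usemem_mmap_cacheable(struct vm_area_struct *vma, unsigned long pfn)
{
	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start, vma->vm_page_prot);
}

/* What the generic vfio-pci-core BAR path conceptually does: mark the
 * mapping uncached before remapping.
 */
static int bar_mmap_uncached(struct vm_area_struct *vma, unsigned long pfn)
{
	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start, vma->vm_page_prot);
}

An L2 guest running only the core ops would take the second path on top of
an L1 mapping that took the first, which is the aliasing being flagged.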
