Message-ID: <CAHTA-ubXiDePmfgTdPbg144tHmRZR8=2cNshcL5tMkoMXdyn_Q@mail.gmail.com>
Date: Tue, 26 Nov 2024 16:18:26 -0600
From: Mitchell Augustin <mitchell.augustin@...onical.com>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: linux-pci@...r.kernel.org, kvm@...r.kernel.org, 
	Bjorn Helgaas <bhelgaas@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: drivers/pci: (and/or KVM): Slow PCI initialization during VM boot
 with passthrough of large BAR Nvidia GPUs on DGX H100

Thanks Alex,

> The VM needs to be given enough 64-bit MMIO space for the devices, at which point the BIOS should be able to fully assign the BARs and then pci=nocrs,realloc should not be necessary

Understood - that matches what I have observed as well: pci=nocrs,realloc
is not needed when OVMF advertises a large enough MMIO window
(the initial BAR assignment is correct, just slow to generate).
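(For reference, the way I've been overriding the window size in my
tests is the X-PciMmio64Mb knob mentioned further down, passed to OVMF
over fw_cfg - roughly like this, with the size in MB and the value
here purely illustrative:

    qemu-system-x86_64 ... \
        -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=655360

Any value large enough to cover all of the GPU memory regions gives me
the slow-but-correct assignment described below.)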

> Are you seeing __pci_read_base() called 20-40 times repeatedly for a given device or in total for the VM boot.  Minimally it needs to be called once for every PCI device, so it's not clear to me if you're reporting something excessive based on using pci=realloc or there are just enough devices to justify this many calls.

This is the total for the entire VM boot, and it happens even without
pci=realloc. In these tests I was passing through 4 GPUs to the guest,
and each GPU had 3 BARs (though only one of the three per GPU is
128GB).
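(The per-GPU BAR layout is easy to confirm from inside the guest with
something along these lines - the bus address is illustrative:

    lspci -vv -s 01:00.0 | grep -i region

which lists the three memory regions for that GPU, including the 128GB
one.)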

> If 5s is the measure of enabling one GPU, the VM BIOS will need to do that at least once and the guest OS will need to do that at least once, we're up to 4 GPUs * 5s * 2 times = 40s

5s was more of an extreme upper bound on wall clock time; in practice
it's closer to 1.5 seconds per GPU. (I also got hotplug working this
morning on a VM where the GPUs weren't attached at boot, and observed
the same.) In my tests, the cumulative time to hotplug all 4-8 GPUs
after boot is marginal (under 20 seconds total), which is drastically
better than the extra 1-3 minutes of wall clock time spent in those
init stages when the GPUs are attached at boot. Neither of those
numbers includes the time the VM takes to allocate the full 900GB of
guest memory, since that stays the same in all of my configs and is
not a concern for me; the only difference is that it happens at first
hotplug in the hotplug case, versus at boot time when the GPUs are
present from the start.

> there's no indication I can find in virt-install command of using hugepages

I just gave this a test with 1G hugepages used as the backing store
for the VM memory, and it doesn't impact the GPU initialization time.
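(For completeness, by 1G hugepage backing I mean roughly the following
- reserving 1G pages on the host and pointing QEMU's guest-RAM backend
at them; the counts, paths, and sizes are illustrative:

    # host: reserve 1G hugepages and mount a hugetlbfs for them
    echo 960 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
    mkdir -p /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G

    # qemu: back guest memory with those pages
    qemu-system-x86_64 ... \
        -object memory-backend-file,id=ram0,size=900G,mem-path=/dev/hugepages1G,prealloc=on \
        -machine q35,memory-backend=ram0

With that in place, the per-GPU BAR init time is unchanged.)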

> The BAR space is walked, faulted, and mapped.  I'm sure you're at least experiencing scaling issues of that with 128GB BARs.

The part that is strange to me is that I don't see the initialization
slowdown at all when the GPUs are hotplugged after boot completes.
Isn't what you describe here also happening during the hotplugging
process, or is it different in some way?

> I know there's room to improve the former. We just recently added kernel support for pgd and pmd large page mapping for pfnmap regions and QEMU 9.2-rc includes alignment of vfio-pci BARs to try to optimize that towards pgd where possible.  We're still trying to pin individual pages though, so percolating the mapping sizes up through the vfio IOMMU backend could help.

Good to hear that there is active work being done to speed this up.
FWIW though, I did just try this out with the 6.12 kernel in my guest
and host with qemu-9.2.0-rc1, and I do not see any improvement in the
PCI init time when my devices are attached during boot.

Thanks,

On Tue, Nov 26, 2024 at 11:34 AM Alex Williamson
<alex.williamson@...hat.com> wrote:
>
> On Mon, 25 Nov 2024 16:46:29 -0600
> Mitchell Augustin <mitchell.augustin@...onical.com> wrote:
>
> > Hello,
> >
> > I've been working on a bug regarding slow PCI initialization and BAR
> > assignment times for Nvidia GPUs passed-through to VMs on our DGX H100
> > that I originally believed to be an issue in OVMF, but upon further
> > investigation, I'm now suspecting that it may be an issue somewhere in
> > the kernel. (Here is the original edk2 mailing list thread, with extra
> > context: https://edk2.groups.io/g/devel/topic/109651206) [0]
> >
> >
> > When running the 6.12 kernel on a DGX H100 host with 4 GPUs passed
> > through using CPU passthrough and this virt-install command[1], VMs
> > using the latest OVMF version will take around 2 minutes for the guest
> > kernel to boot and initialize PCI devices/BARs for the GPUs.
> > Originally, I was investigating this as an issue in OVMF, because GPU
> > initialization takes much less time when our host is running an OVMF
> > version with this patch[2] removed (which only calculates the MMIO
> > window size differently). Without that patch, the guest kernel does
> > boot quickly, but we can only use the Nvidia GPUs within the guest if
> > `pci=nocrs pci=realloc` are set in the guest (evidently since the MMIO
> > windows advertised by OVMF to the kernel without this patch are
> > incorrect). So, the OVMF patch being present does evidently result in
> > correct MMIO windows and prevent us from needing those kernel options,
> > but the VM boot time is much slower.
> >
> >
> > In discussing this, another contributor reported slow PCIe/BAR
> > initialization times for large BAR Nvidia GPUs in Linux when using VMs
> > with SeaBIOS as well. This, combined with me not seeing any slowness
> > when these GPUs are initialized on the host, and the fact that this
> > slowness only happens when CPU passthrough is used, are leading me to
> > suspect that this may actually be a problem somewhere in the KVM or
> > vfio-pci stack. I did also attempt manually setting different MMIO
> > window sizes using the X-PciMmio64Mb OVMF/QEMU knob, and it seems that
> > any MMIO window size large enough to accommodate all GPU memory
> > regions does result in this slower initialization time (but also a
> > valid mapping).
>
> The VM needs to be given enough 64-bit MMIO space for the devices, at
> which point the BIOS should be able to fully assign the BARs and then
> pci=nocrs,realloc should not be necessary.
>
> > I did some profiling of the guest kernel during boot, and I've
> > identified that it seems like the most time is spent in this
> > pci_write_config_word() call in __pci_read_base() of
> > drivers/pci/probe.c.[3] Each of those pci_write_config_word() calls
> > takes about 2 seconds, but it adds up to a significant chunk of the
> > initialization time since __pci_read_base() is executed somewhere
> > between 20-40 times in my VM boot.
>
> A lot happens in the VMM when the memory enable bit is set in the
> command register.  This is when the MMIO BARs of the device enter the
> AddressSpace in QEMU and are caught by the MemoryListener to create DMA
> mappings though the IOMMU.  The BAR space is walked, faulted, and
> mapped.  I'm sure you're at least experiencing scaling issues of that
> with 128GB BARs.
>
> Are you seeing __pci_read_base() called 20-40 times repeatedly for a
> given device or in total for the VM boot.  Minimally it needs to be
> called once for every PCI device, so it's not clear to me if you're
> reporting something excessive based on using pci=realloc or there are
> just enough devices to justify this many calls.
>
> > As a point of comparison, I measured the time it took to hot-unplug
> > and re-plug these GPUs after the VM booted, and observed much more
> > reasonable times (under 5s for each GPU to re-initialize its memory
> > regions). I've also been trying to get this hotplugging working in VMs
> > where the GPUs aren't initially attached at boot, but in any such
> > configuration, the memory regions for the PCI slots that the GPUs get
> > attached to during hotplug are too small for the full 128GB these GPUs
> > require (and I have yet to figure out a way to fix that. More details
> > on that in [0]).
>
> If 5s is the measure of enabling one GPU, the VM BIOS will need to do
> that at least once and the guest OS will need to do that at least once,
> we're up to 4 GPUs * 5s * 2 times = 40s.  If the guest OS toggles
> memory enable more than once, your 2 minute boot time isn't sounding
> too far off.  Then there's also the fact that the VM appears to be
> given 900+GB of RAM, which also needs to be pinned and mapped for DMA
> and there's no indication I can find in virt-install command of using
> hugepages.
>
> There are essentially two things in play here, how long does it take to
> map that BAR into the VM address space, and how many times is it done.
> I know there's room to improve the former.  We just recently added
> kernel support for pgd and pmd large page mapping for pfnmap regions
> and QEMU 9.2-rc includes alignment of vfio-pci BARs to try to optimize
> that towards pgd where possible.  We're still trying to pin individual
> pages though, so percolating the mapping sizes up through the vfio
> IOMMU backend could help.  Thanks,
>
> Alex
>


-- 
Mitchell Augustin
Software Engineer - Ubuntu Partner Engineering
