[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <ZiBMnHoyMsoRhLAL@google.com>
Date: Wed, 17 Apr 2024 15:26:36 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
isaku.yamahata@...el.com, xiaoyao.li@...el.com, binbin.wu@...ux.intel.com,
rick.p.edgecombe@...el.com
Subject: Re: [PATCH 5/7] KVM: x86/mmu: Introduce kvm_tdp_map_page() to
populate guest memory
On Wed, Apr 17, 2024, Paolo Bonzini wrote:
> On Wed, Apr 17, 2024 at 11:24 PM Sean Christopherson <seanjc@...gle.com> wrote:
> > Do we want to restrict this to the TDP MMU? Not for any particular reason,
> > mostly just to keep moving towards officially deprecating/removing TDP
> > support from the shadow MMU.
>
> Heh, yet another thing I briefly thought about while reviewing Isaku's
> work. In the end I decided that, with the implementation being just a
> regular prefault, there's not much to save from keeping the shadow MMU
> away from this.
Yeah.
> The real ugly part is that if the memslots are zapped the
> pre-population effect basically goes away (damn
> kvm_arch_flush_shadow_memslot).
Ah, the eternal thorn in my side.
> This is the reason why I initially thought of KVM_CHECK_EXTENSION for the VM
> file descriptor, to only allow this for TDX VMs.
I'm fairly certain memslot deletion is mostly a QEMU specific problem. Allegedly
(I haven't verified), our userspace+firmware doesn't delete any memslots during
boot.
And it might even be solvable for QEMU, at least for some configurations. E.g.
during boot, my QEMU+OVMF setup creates and deletes the SMRAM memslot (despite my
KVM build not supporting SMM), and deletes the lower RAM memslot when relocating
BIOS. The SMRAM is definitely solvable, and the BIOS relocation stuff seems like
it's solvable too.
Powered by blists - more mailing lists