Message-ID: <9d982f1a5e3b57780445aadd08fcb5315f72cab9.camel@redhat.com>
Date: Mon, 23 Aug 2021 21:06:51 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Wei Huang <wei.huang2@....com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, vkuznets@...hat.com,
seanjc@...gle.com, wanpengli@...cent.com, jmattson@...gle.com,
joro@...tes.org, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com
Subject: Re: [PATCH v3 0/3] SVM 5-level page table support
On Mon, 2021-08-23 at 10:15 -0500, Wei Huang wrote:
> On 08/23 12:20, Maxim Levitsky wrote:
> > On Thu, 2021-08-19 at 18:43 +0200, Paolo Bonzini wrote:
> > > On 18/08/21 18:55, Wei Huang wrote:
> > > > This patch set adds 5-level page table support for AMD SVM. When
> > > > 5-level paging is enabled on the host OS, the nested page tables for
> > > > guest VMs will use the same format as the host OS (i.e. 5-level NPT).
> > > > These patches were tested with various combinations of settings and
> > > > test cases (nested/regular VMs, AMD64/i686 kernels, kvm-unit-tests, etc.)
> > > >
> > > > v2->v3:
> > > > * Change the way of building root_hpa by following the existing flow (Sean)
> > > >
> > > > v1->v2:
> > > > * Remove v1's arch-specific get_tdp_level() and add a new parameter,
> > > > tdp_forced_root_level, to allow forced TDP level (Sean)
> > > > * Add additional comment on tdp_root table chaining trick and change the
> > > > PML root table allocation code (Sean)
> > > > * Revise Patch 1's commit msg (Sean and Jim)
> > > >
> > > > Thanks,
> > > > -Wei
> > > >
> > > > Wei Huang (3):
> > > > KVM: x86: Allow CPU to force vendor-specific TDP level
> > > > KVM: x86: Handle the case of 5-level shadow page table
> > > > KVM: SVM: Add 5-level page table support for SVM
> > > >
> > > > arch/x86/include/asm/kvm_host.h | 6 ++--
> > > > arch/x86/kvm/mmu/mmu.c | 56 ++++++++++++++++++++++-----------
> > > > arch/x86/kvm/svm/svm.c | 13 ++++----
> > > > arch/x86/kvm/vmx/vmx.c | 3 +-
> > > > 4 files changed, 49 insertions(+), 29 deletions(-)
> > > >
> > >
> > > Queued, thanks, with NULL initializations according to Tom's review.
> > >
> > > Paolo
> > >
> >
> > Hi,
> > Yesterday while testing my SMM patches, I noticed a minor issue:
> > It seems that this patchset breaks my 32-bit nested VM test case with NPT=0.
> >
>
> Could you elaborate on the detailed setup? NPT=0 for KVM running on L1?
> Which VM is 32-bit, L1 or L2?
NPT=0; both L1 and L2 were 32-bit PAE VMs. The test was done to see how well
this setup deals with SMM entry/exit, with SMM triggered by the L1 guest,
and to check for any PDPTR-related shenanigans.
I disabled the TDP MMU for now, although in this setup it won't be used anyway.
The BIOS was SeaBIOS, patched to use PAE itself during boot, as well as in SMM.
(from https://mail.coreboot.org/pipermail/seabios/2015-September/009788.html, patch applied by hand)
The failure was immediate without my hack: L1 died as soon as L2 was started, due to an assert in
this code.
Best regards,
Maxim Levitsky
>
> Thanks,
> -Wei
>
> > This hack makes it work again for me (I don't yet use the TDP MMU).
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index caa3f9aee7d1..c25e0d40a620 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -3562,7 +3562,7 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
> > mmu->shadow_root_level < PT64_ROOT_4LEVEL)
> > return 0;
> >
> > - if (mmu->pae_root && mmu->pml4_root && mmu->pml5_root)
> > + if (mmu->pae_root && mmu->pml4_root)
> > return 0;
> >
> > /*
> >
> >
> >
> > Best regards,
> > Maxim Levitsky
> >