Message-ID: <20210823151549.rkkrktvtpu6yapmd@weiserver.amd.com>
Date: Mon, 23 Aug 2021 10:15:49 -0500
From: Wei Huang <wei.huang2@....com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, vkuznets@...hat.com,
seanjc@...gle.com, wanpengli@...cent.com, jmattson@...gle.com,
joro@...tes.org, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com
Subject: Re: [PATCH v3 0/3] SVM 5-level page table support
On 08/23 12:20, Maxim Levitsky wrote:
> On Thu, 2021-08-19 at 18:43 +0200, Paolo Bonzini wrote:
> > On 18/08/21 18:55, Wei Huang wrote:
> > > This patch set adds 5-level page table support for AMD SVM. When the
> > > 5-level page table is enabled on the host OS, the nested page table for
> > > guest VMs will use the same format as the host OS (i.e. 5-level NPT).
> > > These patches were tested with various combinations of settings and test
> > > cases (nested/regular VMs, AMD64/i686 kernels, kvm-unit-tests, etc.)
> > >
> > > v2->v3:
> > > * Change the way of building root_hpa by following the existing flow (Sean)
> > >
> > > v1->v2:
> > > * Remove v1's arch-specific get_tdp_level() and add a new parameter,
> > > tdp_forced_root_level, to allow forced TDP level (Sean)
> > > * Add additional comment on tdp_root table chaining trick and change the
> > > PML root table allocation code (Sean)
> > > * Revise Patch 1's commit msg (Sean and Jim)
> > >
> > > Thanks,
> > > -Wei
> > >
> > > Wei Huang (3):
> > > KVM: x86: Allow CPU to force vendor-specific TDP level
> > > KVM: x86: Handle the case of 5-level shadow page table
> > > KVM: SVM: Add 5-level page table support for SVM
> > >
> > > arch/x86/include/asm/kvm_host.h | 6 ++--
> > > arch/x86/kvm/mmu/mmu.c | 56 ++++++++++++++++++++++-----------
> > > arch/x86/kvm/svm/svm.c | 13 ++++----
> > > arch/x86/kvm/vmx/vmx.c | 3 +-
> > > 4 files changed, 49 insertions(+), 29 deletions(-)
> > >
> >
> > Queued, thanks, with NULL initializations according to Tom's review.
> >
> > Paolo
> >
>
> Hi,
> Yesterday while testing my SMM patches, I noticed a minor issue:
> It seems that this patchset breaks my 32-bit nested VM test case with NPT=0.
>
Could you elaborate on the exact setup? Is NPT=0 set for the KVM instance
running in L1? And which VM is 32-bit - L1 or L2?
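
For context on the series summary quoted above: the intent is simply to make
the NPT root level follow the host paging level, along the lines of the sketch
below. This is an illustration only - the helper name is made up, though
pgtable_l5_enabled() is the existing host-side check for LA57 and the
PT64_ROOT_*LEVEL constants already exist in KVM's MMU code:

	/*
	 * Sketch: if the L0 host runs with 5-level paging (LA57), build the
	 * nested page tables with five levels as well; otherwise keep the
	 * existing four levels.
	 */
	static int example_get_npt_level(void)
	{
		return pgtable_l5_enabled() ? PT64_ROOT_5LEVEL
					    : PT64_ROOT_4LEVEL;
	}

The reason the NPT=0 question matters is that with NPT disabled the guest runs
with shadow paging instead, which is where mmu_alloc_special_roots() in the
hunk quoted below comes into play.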
Thanks,
-Wei
> This hack makes it work again for me (I don't yet use TDP mmu).
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index caa3f9aee7d1..c25e0d40a620 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3562,7 +3562,7 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
> mmu->shadow_root_level < PT64_ROOT_4LEVEL)
> return 0;
>
> - if (mmu->pae_root && mmu->pml4_root && mmu->pml5_root)
> + if (mmu->pae_root && mmu->pml4_root)
> return 0;
>
> /*
>
>
>
> Best regards,
> Maxim Levitsky
>
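
The hunk above drops the pml5_root requirement from the early-return check
altogether. An alternative direction - shown only as a sketch under the
assumption that pml5_root is needed solely when the shadow page tables are
five levels deep, and not necessarily what will land upstream - would be to
keep the check but make the pml5_root part conditional:

	/*
	 * Sketch: only insist on pml5_root when the shadow root itself is
	 * 5 levels deep; a 4-level shadow setup can take the early return
	 * without ever allocating it.
	 */
	bool need_pml5 = mmu->shadow_root_level > PT64_ROOT_4LEVEL;

	if (mmu->pae_root && mmu->pml4_root &&
	    (!need_pml5 || mmu->pml5_root))
		return 0;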