Message-ID: <42701fedbe10acf164ec56818b941061be6ffd4e.camel@redhat.com>
Date: Sat, 11 Dec 2021 08:56:23 +0200
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Lai Jiangshan <laijs@...ux.alibaba.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
Xiao Guangrong <guangrong.xiao@...ux.intel.com>
Subject: Re: [PATCH 17/15] KVM: X86: Ensure pae_root to be reconstructed for
shadow paging if the guest PDPTEs is changed
On Fri, 2021-12-10 at 21:07 +0000, Sean Christopherson wrote:
> On Thu, Dec 09, 2021, Paolo Bonzini wrote:
> > On 12/8/21 01:15, Sean Christopherson wrote:
> > > > @@ -832,8 +832,14 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
> > > > if (memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs))) {
> > > > memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
> > > > kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
> > > > - /* Ensure the dirty PDPTEs to be loaded. */
> > > > - kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
> > > > + /*
> > > > + * Ensure the dirty PDPTEs to be loaded for VMX with EPT
> > > > + * enabled or pae_root to be reconstructed for shadow paging.
> > > > + */
> > > > + if (tdp_enabled)
> > > > + kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
> > > > + else
> > > > + kvm_mmu_free_roots(vcpu, vcpu->arch.mmu, KVM_MMU_ROOT_CURRENT);
> > > Shouldn't matter since it's legacy shadow paging, but @mmu should be used instead
> > > of vcpu->arch.mmu.
> >
> > In kvm/next actually there's no mmu parameter to load_pdptrs, so it's okay
> > to keep vcpu->arch.mmu.
> >
> > > To avoid a dependency on the previous patch, I think it makes sense to have this be:
> > >
> > > if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
> > > kvm_mmu_free_roots(vcpu, mmu, KVM_MMU_ROOT_CURRENT);
> > >
> > > before the memcpy().
> > >
> > > Then we can decide independently if skipping the KVM_REQ_LOAD_MMU_PGD if the
> > > PDPTRs are unchanged with respect to the MMU is safe.
> >
> > Do you disagree that there's already an invariant that the PDPTRs can only
> > be dirty if KVM_REQ_LOAD_MMU_PGD---and therefore a previous change to the
> > PDPTRs would have triggered KVM_REQ_LOAD_MMU_PGD?
>
> What I think is moot, because commit 24cd19a28cb7 ("KVM: X86: Update mmu->pdptrs
> only when it is changed") breaks nested VMs with EPT in L0 and PAE shadow paging
> in L2. Reproducing is trivial, just disable EPT in L1 and run a VM. I haven't
> investigated how it breaks things, because why it's broken is secondary for me.
>
> My primary concern is that we would even consider optimizing the PDPTR logic without
> a mountain of evidence that any patch is correct for all scenarios. We had to add
> an entire ioctl() just to get PDPTRs functional. This apparently wasn't validated
> against a simple use case, let alone against things like migration with nested VMs,
> multiple L2s, etc...

I did validate *SREGS2* against all the cases I could (migration, EPT/NPT disabled, etc.).
I even started testing SMM to see how it affects PDPTRs, and patched seabios to use PAE paging.
I still could have missed something.
But note that qemu still doesn't use that ioctl (the patch is stuck in review).
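
For reference, my understanding of the ordering Sean suggests (a sketch
only, using the names from the quoted diff rather than the exact kvm/next
code) is roughly:

	/*
	 * Free the current PAE root *before* updating the cached
	 * PDPTEs, so shadow paging reconstructs pae_root from the
	 * new values independently of KVM_REQ_LOAD_MMU_PGD handling.
	 */
	if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
		kvm_mmu_free_roots(vcpu, vcpu->arch.mmu, KVM_MMU_ROOT_CURRENT);

	if (memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs))) {
		memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
		kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
		kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
	}
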
Best regards,
Maxim Levitsky