Message-ID: <8b7129ed-0377-7b91-c741-44ac2202081a@redhat.com>
Date: Thu, 8 Apr 2021 17:57:54 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Ben Gardon <bgardon@...gle.com>,
Brijesh Singh <brijesh.singh@....com>,
Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH v2 07/17] KVM: x86/mmu: Check PDPTRs before allocating PAE roots

On 08/04/21 17:48, Sean Christopherson wrote:
> Freaking PDPTRs. I was really hoping we could keep the lock and pages_available()
> logic outside of the helpers. What if kvm_mmu_load() reads the PDPTRs and
> passes them into mmu_alloc_shadow_roots()? Or is that too ugly?
The patch I posted (though untested) tries to do that in a slightly less
ugly way, by pushing make_mmu_pages_available() down into mmu_alloc_*_roots().
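
For illustration, roughly the shape that takes (a sketch, not the posted
patch; the PDPTR caching and error handling here are approximations):

static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
{
        struct kvm_mmu *mmu = vcpu->arch.mmu;
        u64 pdptrs[4];
        int i, r = 0;

        /*
         * Read the PDPTRs before taking mmu_lock: on SVM the read can
         * touch guest memory, fault, and therefore sleep.
         */
        if (mmu->root_level == PT32E_ROOT_LEVEL) {
                for (i = 0; i < 4; ++i)
                        pdptrs[i] = mmu->get_pdptr(vcpu, i);
        }

        write_lock(&vcpu->kvm->mmu_lock);
        if (make_mmu_pages_available(vcpu)) {
                r = -ENOSPC;
                goto out_unlock;
        }

        /* ... allocate the roots, consuming the cached pdptrs ... */

out_unlock:
        write_unlock(&vcpu->kvm->mmu_lock);
        return r;
}

With that, kvm_mmu_load() would not need to know about the lock or
pages_available() logic at all.
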
Paolo
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index efb41f31e80a..e3c4938cd665 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3275,11 +3275,11 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
> return 0;
> }
>
> -static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
> +static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu, u64 pdptrs[4])
> {
> struct kvm_mmu *mmu = vcpu->arch.mmu;
> - u64 pdptrs[4], pm_mask;
> gfn_t root_gfn, root_pgd;
> + u64 pm_mask;
> hpa_t root;
> int i;
>
> @@ -3291,11 +3291,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
>
> if (mmu->root_level == PT32E_ROOT_LEVEL) {
> for (i = 0; i < 4; ++i) {
> - pdptrs[i] = mmu->get_pdptr(vcpu, i);
> - if (!(pdptrs[i] & PT_PRESENT_MASK))
> - continue;
> -
> - if (mmu_check_root(vcpu, pdptrs[i] >> PAGE_SHIFT))
> + if ((pdptrs[i] & PT_PRESENT_MASK) &&
> + mmu_check_root(vcpu, pdptrs[i] >> PAGE_SHIFT))
> return 1;
> }
> }
> @@ -4844,21 +4841,33 @@ EXPORT_SYMBOL_GPL(kvm_mmu_reset_context);
>
> int kvm_mmu_load(struct kvm_vcpu *vcpu)
> {
> - int r;
> + struct kvm_mmu *mmu = vcpu->arch.mmu;
> + u64 pdptrs[4];
> + int r, i;
>
> - r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->direct_map);
> + r = mmu_topup_memory_caches(vcpu, !mmu->direct_map);
> if (r)
> goto out;
> r = mmu_alloc_special_roots(vcpu);
> if (r)
> goto out;
> +
> + /*
> + * On SVM, reading PDPTRs might access guest memory, which might fault
> + * and thus might sleep. Grab the PDPTRs before acquiring mmu_lock.
> + */
> + if (!mmu->direct_map && mmu->root_level == PT32E_ROOT_LEVEL) {
> + for (i = 0; i < 4; ++i)
> + pdptrs[i] = mmu->get_pdptr(vcpu, i);
> + }
> +
> write_lock(&vcpu->kvm->mmu_lock);
> if (make_mmu_pages_available(vcpu))
> r = -ENOSPC;
> else if (vcpu->arch.mmu->direct_map)
> r = mmu_alloc_direct_roots(vcpu);
> else
> - r = mmu_alloc_shadow_roots(vcpu);
> + r = mmu_alloc_shadow_roots(vcpu, pdptrs);
> write_unlock(&vcpu->kvm->mmu_lock);
> if (r)
> goto out;
>