Message-Id: <20210305011101.3597423-18-seanjc@google.com>
Date: Thu, 4 Mar 2021 17:11:01 -0800
From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Joerg Roedel <joro@8bytes.org>, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ben Gardon <bgardon@google.com>,
	Brijesh Singh <brijesh.singh@amd.com>,
	Tom Lendacky <thomas.lendacky@amd.com>
Subject: [PATCH v2 17/17] KVM: x86/mmu: WARN on NULL pae_root or lm_root, or
	bad shadow root level

WARN if KVM is about to dereference a NULL pae_root or lm_root when
loading an MMU, and convert the BUG() on a bad shadow_root_level into a
WARN (now that errors are handled cleanly).

With nested NPT, botching the level and sending KVM down the wrong path
is all too easy, and the on-demand allocation of pae_root and lm_root
means bugs crash the host.  Obviously, KVM could unconditionally
allocate the roots, but that's arguably a worse failure mode as it would
potentially corrupt the guest instead of crashing it.

Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
 arch/x86/kvm/mmu/mmu.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)
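
Note for readers (not part of the patch): below is a minimal, compilable
userspace sketch of the warn-once-and-fail pattern these hunks add, in
case the WARN_ON_ONCE() + -EIO semantics are unfamiliar.  The macro is a
stand-in for the kernel's, and fake_mmu/fake_alloc_roots are made-up
names for illustration only, not KVM code.

/* Build with: gcc -std=gnu11 -Wall sketch.c (statement expressions are
 * a GNU extension, as in the kernel).
 */
#include <errno.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's WARN_ON_ONCE(): evaluates to the
 * truth value of cond, and prints a warning only the first time the
 * condition is hit at this call site.
 */
#define WARN_ON_ONCE(cond) ({						\
	static int __warned;						\
	int __cond = !!(cond);						\
	if (__cond && !__warned) {					\
		__warned = 1;						\
		fprintf(stderr, "WARNING: %s\n", #cond);		\
	}								\
	__cond;								\
})

struct fake_mmu {
	unsigned long *pae_root;	/* NULL if allocation was skipped */
};

/* Mirrors the shape of the checks added to mmu_alloc_direct_roots() and
 * mmu_alloc_shadow_roots(): refuse to proceed rather than dereference a
 * NULL root.
 */
static int fake_alloc_roots(struct fake_mmu *mmu)
{
	if (WARN_ON_ONCE(!mmu->pae_root))
		return -EIO;	/* caller unwinds the load, VM dies cleanly */
	return 0;
}

int main(void)
{
	struct fake_mmu mmu = { .pae_root = NULL };

	printf("%d\n", fake_alloc_roots(&mmu));	/* warns, prints -5 */
	printf("%d\n", fake_alloc_roots(&mmu));	/* silent, prints -5 */
	return 0;
}

The design point, as the changelog says, is that returning -EIO lets the
caller unwind the MMU load and terminate only the offending VM, whereas
BUG() (or a NULL dereference) takes down the whole host.
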
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bceff7d815c3..eb9dd8144fa5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3253,6 +3253,9 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
 		mmu->root_hpa = root;
 	} else if (shadow_root_level == PT32E_ROOT_LEVEL) {
+		if (WARN_ON_ONCE(!mmu->pae_root))
+			return -EIO;
+
 		for (i = 0; i < 4; ++i) {
 			WARN_ON_ONCE(mmu->pae_root[i]);
 
@@ -3262,8 +3265,10 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 					   shadow_me_mask;
 		}
 		mmu->root_hpa = __pa(mmu->pae_root);
-	} else
-		BUG();
+	} else {
+		WARN_ONCE(1, "Bad TDP root level = %d\n", shadow_root_level);
+		return -EIO;
+	}
 
 	/* root_pgd is ignored for direct MMUs. */
 	mmu->root_pgd = 0;
@@ -3307,6 +3312,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		goto set_root_pgd;
 	}
 
+	if (WARN_ON_ONCE(!mmu->pae_root))
+		return -EIO;
+
 	/*
 	 * We shadow a 32 bit page table. This may be a legacy 2-level
 	 * or a PAE 3-level page table. In either case we need to be aware that
@@ -3316,6 +3324,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
 		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
 
+		if (WARN_ON_ONCE(!mmu->lm_root))
+			return -EIO;
+
 		mmu->lm_root[0] = __pa(mmu->pae_root) | pm_mask;
 	}
 
--
2.30.1.766.gb4fecdf3b7-goog