Message-Id: <20220325163815.3514-1-paskripkin@gmail.com>
Date: Fri, 25 Mar 2022 19:38:15 +0300
From: Pavel Skripkin <paskripkin@...il.com>
To: pbonzini@...hat.com, seanjc@...gle.com, vkuznets@...hat.com,
wanpengli@...cent.com, jmattson@...gle.com
Cc: x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Pavel Skripkin <paskripkin@...il.com>,
syzbot+717ed82268812a643b28@...kaller.appspotmail.com
Subject: [RFC PATCH] KVM: x86/mmu: fix general protection fault in kvm_mmu_uninit_tdp_mmu

Syzbot reported a general protection fault in kvm_mmu_uninit_tdp_mmu(),
caused by passing a NULL pointer to flush_workqueue().

tdp_mmu_zap_wq is allocated via alloc_workqueue(), which may fail. There
is no error handling and the return value of kvm_mmu_init_tdp_mmu() is
simply ignored by its caller. Moreover, the kvm_*_init_vm() functions
are all void, so the easiest solution is to check that tdp_mmu_zap_wq is
a valid pointer before passing it anywhere.
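
For reference, the workqueue is allocated in kvm_mmu_init_tdp_mmu()
roughly like this (paraphrased, so take the exact flags with a grain of
salt):

	/* alloc_workqueue() returns NULL on failure; nothing checks the result. */
	kvm->arch.tdp_mmu_zap_wq =
		alloc_workqueue("kvm", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 0);

so a VM can come up with a NULL tdp_mmu_zap_wq, and that pointer is
later dereferenced on the flush/destroy and queue_work() paths touched
below.
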
Fixes: 22b94c4b63eb ("KVM: x86/mmu: Zap invalidated roots via asynchronous worker")
Reported-and-tested-by: syzbot+717ed82268812a643b28@...kaller.appspotmail.com
Signed-off-by: Pavel Skripkin <paskripkin@...il.com>
---
arch/x86/kvm/mmu/tdp_mmu.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index e7e7876251b3..b3e8ff7ac5b0 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -48,8 +48,10 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
if (!kvm->arch.tdp_mmu_enabled)
return;
- flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
- destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
+ if (kvm->arch.tdp_mmu_zap_wq) {
+ flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
+ destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
+ }
WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
@@ -119,9 +121,11 @@ static void tdp_mmu_zap_root_work(struct work_struct *work)
static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
{
- root->tdp_mmu_async_data = kvm;
- INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
- queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
+ if (kvm->arch.tdp_mmu_zap_wq) {
+ root->tdp_mmu_async_data = kvm;
+ INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
+ queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
+ }
}
static inline bool kvm_tdp_root_mark_invalid(struct kvm_mmu_page *page)
--
2.35.1