Message-Id: <20230216154115.710033-4-jiangshanlai@gmail.com>
Date: Thu, 16 Feb 2023 23:41:09 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Lai Jiangshan <jiangshan.ljs@...group.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org
Subject: [PATCH V3 03/14] kvm: x86/mmu: Check mmu->sync_page pointer in kvm_sync_page_check()
From: Lai Jiangshan <jiangshan.ljs@...group.com>

Check the mmu->sync_page pointer in kvm_sync_page_check() before calling
it to catch any possible mistake.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@...group.com>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ee2837ea18d4..69ab0d1bb0ec 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1940,7 +1940,7 @@ static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
 	 * reserved bits checks will be wrong, etc...
 	 */
-	if (WARN_ON_ONCE(sp->role.direct ||
+	if (WARN_ON_ONCE(sp->role.direct || !vcpu->arch.mmu->sync_page ||
 			 (sp->role.word ^ root_role.word) & ~sync_role_ign.word))
 		return false;
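
For context, here is a minimal sketch (not part of this patch) of where the
guarded callback ends up being dereferenced once the check passes, assuming
the __kvm_sync_page() caller structure used in arch/x86/kvm/mmu/mmu.c by
this series:

	/* Sketch only: mirrors the caller of kvm_sync_page_check(). */
	static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
	{
		/* Bail out (with a WARN) before touching mmu->sync_page. */
		if (!kvm_sync_page_check(vcpu, sp))
			return -1;

		/* Safe: the check above verified the pointer is non-NULL. */
		return vcpu->arch.mmu->sync_page(vcpu, sp);
	}

With the added !vcpu->arch.mmu->sync_page clause, a stray call path that
reaches this point without a sync_page implementation now trips the
WARN_ON_ONCE() and fails the sync instead of dereferencing a NULL pointer.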
--
2.19.1.6.gb485710b