Message-ID: <20250207030739.1649-1-yan.y.zhao@intel.com>
Date: Fri, 7 Feb 2025 11:07:39 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: pbonzini@...hat.com,
seanjc@...gle.com
Cc: rick.p.edgecombe@...el.com,
linux-kernel@...r.kernel.org,
kvm@...r.kernel.org,
Yan Zhao <yan.y.zhao@...el.com>
Subject: [PATCH 1/4] KVM: x86/mmu: Further check old SPTE is leaf for spurious prefetch fault

Instead of simply treating a prefetch fault as spurious when there's a
shadow-present old SPTE, further check whether the old SPTE is a leaf to
determine if the prefetch fault is spurious.

It's not reasonable to treat a prefetch fault as spurious when there is a
shadow-present non-leaf SPTE but no shadow-present leaf SPTE. e.g., with
the below sequence, a prefetch fault should not be regarded as spurious:
1. add a memslot with size 4K
2. prefault GPA A in the memslot
3. delete the memslot (zap all disabled)
4. re-add the memslot with size 2M
5. prefault GPA A again.
In step 5, the prefetch fault attempts to install a 2M huge entry.
Since step 3 zaps the leaf SPTE for GPA A while keeping the non-leaf SPTE,
the leaf entry will remain empty after step 5 if the prefetch fault is
regarded as spurious merely because a shadow-present non-leaf SPTE is
found.
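
As an illustration only (not part of the patch), below is a minimal sketch
of the combined condition now applied at both call sites; the wrapper name
prefetch_fault_is_spurious() is hypothetical, while is_shadow_present_pte()
and is_last_spte() are the existing helpers used in the diff:

static bool prefetch_fault_is_spurious(u64 old_spte, int level)
{
	/*
	 * A prefetch fault is spurious only if the existing SPTE is both
	 * shadow-present and a leaf at the fault's target level, i.e. the
	 * final mapping is already in place.  A shadow-present non-leaf
	 * SPTE only means intermediate page tables exist, so the fault
	 * must still be handled to install the leaf entry.
	 */
	return is_shadow_present_pte(old_spte) &&
	       is_last_spte(old_spte, level);
}
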
Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a45ae60e84ab..3d74e680006f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2846,7 +2846,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
}
if (is_shadow_present_pte(*sptep)) {
- if (prefetch)
+ if (prefetch && is_last_spte(*sptep, level))
return RET_PF_SPURIOUS;
/*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 046b6ba31197..ab65fd915ef2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1137,7 +1137,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
if (WARN_ON_ONCE(sp->role.level != fault->goal_level))
return RET_PF_RETRY;
- if (fault->prefetch && is_shadow_present_pte(iter->old_spte))
+ if (fault->prefetch && is_shadow_present_pte(iter->old_spte) &&
+ is_last_spte(iter->old_spte, iter->level))
return RET_PF_SPURIOUS;
if (is_shadow_present_pte(iter->old_spte) &&
--
2.43.2