Message-Id: <20210924163152.289027-17-pbonzini@redhat.com>
Date: Fri, 24 Sep 2021 12:31:37 -0400
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: dmatlack@google.com, seanjc@google.com,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Lai Jiangshan <laijs@linux.alibaba.com>
Subject: [PATCH v3 16/31] KVM: x86/mmu: Verify shadow walk doesn't terminate early in page faults

From: Sean Christopherson <seanjc@google.com>

WARN and bail if the shadow walk for faulting in a SPTE terminates early,
i.e. doesn't reach the expected level because the walk encountered a
terminal SPTE.  The shadow walks for page faults are subtle in that they
install non-leaf SPTEs (zapping leaf SPTEs if necessary!) in the loop
body, and consume the newly created non-leaf SPTE in the loop control,
e.g. __shadow_walk_next().  In other words, the walk stops if and only if
the target level is reached, because each iteration installs the non-leaf
SPTE needed to keep the walk valid.
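
To make the install-then-descend pattern concrete, here is a toy model
of the walk (illustrative only; the types, table layout, and names below
are simplifications for this note, not the kernel's actual
for_each_shadow_entry()/__direct_map() machinery):

/*
 * Toy model of the fault-path shadow walk.  At each level above the
 * target, the loop body installs a non-leaf entry (replacing a leaf
 * if one is in the way), so the descent in the loop control always
 * has a valid table to step into.  The walk therefore terminates at
 * exactly goal_level; the assert is the analogue of the WARN_ON_ONCE
 * added by this patch.
 */
#include <assert.h>
#include <stdio.h>

#define ROOT_LEVEL 4

struct entry {
	int present;
	int leaf;
	struct entry *child;
};

static struct entry tables[ROOT_LEVEL + 1][1];	/* one slot per level */

int main(void)
{
	int goal_level = 1;
	int level;

	for (level = ROOT_LEVEL; level > goal_level; level--) {
		struct entry *e = &tables[level][0];

		/* Loop body: zap a leaf, or fill a hole, with a non-leaf
		 * entry pointing at the next table down. */
		if (!e->present || e->leaf) {
			e->present = 1;
			e->leaf = 0;
			e->child = &tables[level - 1][0];
		}
		/* Loop control (__shadow_walk_next() in the kernel)
		 * descends through e->child, which the body just made
		 * valid. */
	}

	/* The check this patch adds before mmu_set_spte(). */
	assert(level == goal_level);
	printf("walk terminated at level %d as expected\n", level);
	return 0;
}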

Opportunistically use fault->goal_level instead of it.level in
FNAME(fetch) to further clarify that KVM always installs the leaf SPTE at
the target level.

Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Message-Id: <20210906122547.263316-1-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c         | 3 +++
 arch/x86/kvm/mmu/paging_tmpl.h | 7 +++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5ba0a844f576..2ddbabad5bd2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3012,6 +3012,9 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			account_huge_nx_page(vcpu->kvm, sp);
 	}
 
+	if (WARN_ON_ONCE(it.level != fault->goal_level))
+		return -EFAULT;
+
 	ret = mmu_set_spte(vcpu, it.sptep, ACC_ALL,
 			   fault->write, fault->goal_level, base_gfn, fault->pfn,
 			   fault->prefault, fault->map_writable);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 6bc0dbc0baff..7a8a2d14a3c7 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -760,9 +760,12 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		}
 	}
 
+	if (WARN_ON_ONCE(it.level != fault->goal_level))
+		return -EFAULT;
+
 	ret = mmu_set_spte(vcpu, it.sptep, gw->pte_access, fault->write,
-			   it.level, base_gfn, fault->pfn, fault->prefault,
-			   fault->map_writable);
+			   fault->goal_level, base_gfn, fault->pfn,
+			   fault->prefault, fault->map_writable);
 	if (ret == RET_PF_SPURIOUS)
 		return ret;

--
2.27.0