Message-Id: <20220204115718.14934-2-pbonzini@redhat.com>
Date: Fri, 4 Feb 2022 06:56:56 -0500
From: Paolo Bonzini <pbonzini@...hat.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: dmatlack@...gle.com, seanjc@...gle.com, vkuznets@...hat.com
Subject: [PATCH 01/23] KVM: MMU: pass uses_nx directly to reset_shadow_zero_bits_mask
reset_shadow_zero_bits_mask has a very unintuitive way of deciding
whether the shadow pages will use the NX bit. The function is used in
two cases, shadow paging and shadow NPT; shadow paging always has a
use for NX and needs to force it enabled, while shadow NPT needs it
only depending on L1's setting.
The actual root problem here is that is_efer_nx, despite being part
of the "base" role, only matches the format of the shadow pages in the
NPT case. For now, just remove the ugly variable initialization and move
the call to reset_shadow_zero_bits_mask out of shadow_mmu_init_context.
The parameter can then be removed after the root problem in the role
is fixed.
Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
---
arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 296f8723f9ae..9424ae90f1ef 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4410,18 +4410,9 @@ static inline u64 reserved_hpa_bits(void)
* follow the features in guest.
*/
static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
- struct kvm_mmu *context)
+ struct kvm_mmu *context,
+ bool uses_nx)
{
- /*
- * KVM uses NX when TDP is disabled to handle a variety of scenarios,
- * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and
- * to generate correct permissions for CR0.WP=0/CR4.SMEP=1/EFER.NX=0.
- * The iTLB multi-hit workaround can be toggled at any time, so assume
- * NX can be used by any non-nested shadow MMU to avoid having to reset
- * MMU contexts. Note, KVM forces EFER.NX=1 when TDP is disabled.
- */
- bool uses_nx = is_efer_nx(context) || !tdp_enabled;
-
/* @amd adds a check on bit of SPTEs, which KVM shouldn't use anyways. */
bool is_amd = true;
/* KVM doesn't use 2-level page tables for the shadow MMU. */
@@ -4829,8 +4820,6 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
reset_guest_paging_metadata(vcpu, context);
context->shadow_root_level = new_role.base.level;
-
- reset_shadow_zero_bits_mask(vcpu, context);
}
static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
@@ -4841,6 +4830,16 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
kvm_calc_shadow_mmu_root_page_role(vcpu, regs, false);
shadow_mmu_init_context(vcpu, context, regs, new_role);
+
+ /*
+ * KVM uses NX when TDP is disabled to handle a variety of scenarios,
+ * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and
+ * to generate correct permissions for CR0.WP=0/CR4.SMEP=1/EFER.NX=0.
+ * The iTLB multi-hit workaround can be toggled at any time, so assume
+ * NX can be used by any non-nested shadow MMU to avoid having to reset
+ * MMU contexts. Note, KVM forces EFER.NX=1 when TDP is disabled.
+ */
+ reset_shadow_zero_bits_mask(vcpu, context, true);
}
static union kvm_mmu_role
@@ -4872,6 +4871,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base);
shadow_mmu_init_context(vcpu, context, &regs, new_role);
+ reset_shadow_zero_bits_mask(vcpu, context, is_efer_nx(context));
}
EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
--
2.31.1