Message-Id: <20210622175739.3610207-35-seanjc@google.com>
Date: Tue, 22 Jun 2021 10:57:19 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
Maxim Levitsky <mlevitsk@...hat.com>
Subject: [PATCH 34/54] KVM: x86/mmu: Use MMU's roles to compute last non-leaf level

Use the MMU's role to get CR4.PSE when determining the last level at
which the guest _cannot_ create a leaf PTE, i.e. the deepest level at
which the guest cannot create a huge page.
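
For reference, the accessor reads the CR4.PSE bit that was snapshotted
into the extended role when the MMU was last configured, rather than the
vCPU's live CR4.  A rough sketch of the helper (the real accessor is
generated by the BUILD_MMU_ROLE_ACCESSOR() macro added earlier in this
series):

  static inline bool is_cr4_pse(struct kvm_mmu *mmu)
  {
          return !!(mmu->mmu_role.ext.cr4_pse);
  }

The two sources agree because the role captures CR4.PSE when the role is
computed, i.e. the accessor returns the same value is_pse(vcpu) yields
at MMU init time.
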
Note, the existing logic is arguably wrong when considering 5-level
paging and the case where 1GB pages aren't supported.  In practice, the
logic is confusing but not broken, because except for 32-bit non-PAE
paging, the PAGE_SIZE bit is reserved when a huge page isn't supported
at that level, i.e. PAGE_SIZE=1 will terminate the guest walk one way or
another.  Furthermore, last_nonleaf_level is only consulted after KVM
has verified there are no reserved bits set.
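
For illustration, since is_last_gpte() honors the PAGE_SIZE bit only at
levels strictly below last_nonleaf_level, the resulting behavior is:

  root_level = 2 (32-bit), CR4.PSE=0:  last_nonleaf_level = 2, no huge pages
  root_level = 2 (32-bit), CR4.PSE=1:  last_nonleaf_level = 3, 4MB at level 2
  root_level = 3 (PAE):                last_nonleaf_level = 3, 2MB at level 2
  root_level = 4 (4-level):            last_nonleaf_level = 4, levels 2 and 3
  root_level = 5 (5-level):            last_nonleaf_level = 5, levels 2, 3 and 4

The root_level = 5 case is the suspect one: level 4 huge pages don't
exist, and level 3 is valid only when 1GB pages are supported, but as
noted above the reserved bit checks catch those cases first.
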
All that confusion will be addressed in a future patch by dropping
last_nonleaf_level entirely. For now, massage the code to continue the
march toward using mmu_role for (almost) all MMU computations.
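
For context, the lone consumer of last_nonleaf_level is is_last_gpte(),
abridged below with paraphrased comments; it relies on unsigned
underflow to turn the level comparison into a mask on bit 7
(PT_PAGE_SIZE_MASK):

  static inline bool is_last_gpte(struct kvm_mmu *mmu,
                                  unsigned level, unsigned gpte)
  {
          /*
           * The RHS has bit 7 set iff level < mmu->last_nonleaf_level,
           * i.e. PT_PAGE_SIZE_MASK is masked off when huge pages can't
           * exist at this level.
           */
          gpte &= level - mmu->last_nonleaf_level;

          /* PG_LEVEL_4K always terminates the walk. */
          gpte |= level - PG_LEVEL_4K - 1;

          return gpte & PT_PAGE_SIZE_MASK;
  }
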
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dcde7514358b..67aa19ab628d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4504,12 +4504,12 @@ static void update_pkru_bitmask(struct kvm_mmu *mmu)
 	}
 }
 
-static void update_last_nonleaf_level(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
+static void update_last_nonleaf_level(struct kvm_mmu *mmu)
 {
 	unsigned root_level = mmu->root_level;
 
 	mmu->last_nonleaf_level = root_level;
-	if (root_level == PT32_ROOT_LEVEL && is_pse(vcpu))
+	if (root_level == PT32_ROOT_LEVEL && is_cr4_pse(mmu))
 		mmu->last_nonleaf_level++;
 }
 
@@ -4666,7 +4666,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 
 	update_permission_bitmask(context, false);
 	update_pkru_bitmask(context);
-	update_last_nonleaf_level(vcpu, context);
+	update_last_nonleaf_level(context);
 	reset_tdp_shadow_zero_bits_mask(vcpu, context);
 }
 
@@ -4724,7 +4724,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 		reset_rsvds_bits_mask(vcpu, context);
 		update_permission_bitmask(context, false);
 		update_pkru_bitmask(context);
-		update_last_nonleaf_level(vcpu, context);
+		update_last_nonleaf_level(context);
 	}
 
 	context->shadow_root_level = new_role.base.level;
@@ -4831,7 +4831,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 	context->direct_map = false;
 
 	update_permission_bitmask(context, true);
-	update_last_nonleaf_level(vcpu, context);
+	update_last_nonleaf_level(context);
 	update_pkru_bitmask(context);
 	reset_rsvds_bits_mask_ept(vcpu, context, execonly);
 	reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
@@ -4929,7 +4929,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 
 	update_permission_bitmask(g_context, false);
 	update_pkru_bitmask(g_context);
-	update_last_nonleaf_level(vcpu, g_context);
+	update_last_nonleaf_level(g_context);
 }
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu)
--
2.32.0.288.g62a8d224e6-goog