Message-Id: <20191106170727.14457-3-sean.j.christopherson@intel.com>
Date: Wed, 6 Nov 2019 09:07:27 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Adam Borowski <kilobyte@...band.pl>,
David Hildenbrand <david@...hat.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: [PATCH 2/2] KVM: x86/mmu: Add helper to consolidate huge page promotion

Add a helper, kvm_is_hugepage_allowed(), to consolidate nearly identical
code between transparent_hugepage_adjust() and
kvm_mmu_zap_collapsible_spte().

Note, this effectively adds a superfluous is_error_noslot_pfn() check in
kvm_mmu_zap_collapsible_spte(), but the added overhead is essentially a
single uop on modern hardware, and taking the hit allows all of the
PFN-related checks to be encapsulated in kvm_is_hugepage_allowed().
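
As a rough sketch of why the extra check is nearly free (based on the
definition in include/linux/kvm_host.h; the mask value below is quoted
from memory and may differ across kernel versions), is_error_noslot_pfn()
boils down to a single mask-and-test on the pfn:

  /* Error and "no memslot" PFNs are encoded in the top bits of kvm_pfn_t. */
  #define KVM_PFN_ERR_NOSLOT_MASK	(0x3ULL << 62)

  static inline bool is_error_noslot_pfn(kvm_pfn_t pfn)
  {
  	return !!(pfn & KVM_PFN_ERR_NOSLOT_MASK);
  }
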
No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
---
 arch/x86/kvm/mmu.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bf82b1f2e834..8565653fce22 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3292,6 +3292,13 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 	return -EFAULT;
 }
 
+static bool kvm_is_hugepage_allowed(kvm_pfn_t pfn)
+{
+	return !is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
+	       !kvm_is_zone_device_pfn(pfn) &&
+	       PageTransCompoundMap(pfn_to_page(pfn));
+}
+
 static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 					gfn_t gfn, kvm_pfn_t *pfnp,
 					int *levelp)
@@ -3305,9 +3312,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 	 * PT_PAGE_TABLE_LEVEL and there would be no adjustment done
 	 * here.
 	 */
-	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
-	    !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
-	    PageTransCompoundMap(pfn_to_page(pfn)) &&
+	if (level == PT_PAGE_TABLE_LEVEL && kvm_is_hugepage_allowed(pfn) &&
 	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
 		unsigned long mask;
 		/*
@@ -5914,9 +5919,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		 * the guest, and the guest page table is using 4K page size
 		 * mapping if the indirect sp has level = 1.
 		 */
-		if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
-		    !kvm_is_zone_device_pfn(pfn) &&
-		    PageTransCompoundMap(pfn_to_page(pfn))) {
+		if (sp->role.direct && kvm_is_hugepage_allowed(pfn)) {
 			pte_list_remove(rmap_head, sptep);
 
 			if (kvm_available_flush_tlb_with_range())
--
2.24.0