Message-Id: <20220321002638.379672-4-mizhang@google.com>
Date: Mon, 21 Mar 2022 00:26:37 +0000
From: Mingwei Zhang <mizhang@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Ben Gardon <bgardon@...gle.com>,
Mingwei Zhang <mizhang@...gle.com>,
David Matlack <dmatlack@...gle.com>,
Jing Zhang <jingzhangos@...gle.com>,
Peter Xu <peterx@...hat.com>, Ben Gardon <bgorden@...gle.com>
Subject: [PATCH 3/4] KVM: x86/mmu: explicitly check nx_hugepage in disallowed_hugepage_adjust()

Add an explicit check for the NX huge page case so that KVM can
reconstruct large mappings after dirty logging is disabled. The existing
code is intended to handle only NX huge pages, but its condition is too
general: it does not account for other use cases, such as dirty logging.
Moreover, the existing code assumes that a present PMD or PUD implies
that 'smaller SPTEs' exist underneath the paging structure. That
assumption does not necessarily hold, because the MMU may zap only the
leaf SPTEs (e.g. when dirty logging is disabled), leaving the upper-level
paging structures in place.

Without the check, KVM incorrectly regards the faulting page as an NX
huge page and refuses to map it at the desired level, which results in
bad performance in the shadow MMU and potentially in the TDP MMU as well.
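
For context, here is a rough sketch of how the direct-map fault path
reaches this check (paraphrased from the surrounding MMU code for
illustration only, not part of this patch; elided parts are marked
with "..."):

	static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
	{
		struct kvm_shadow_walk_iterator it;
		...
		kvm_mmu_hugepage_adjust(vcpu, fault);

		for_each_shadow_entry(vcpu, fault->addr, it) {
			/*
			 * With the early return added below, goal_level is
			 * only lowered for true NX huge pages, so the walk
			 * can stop at a huge level and install a large SPTE.
			 */
			if (fault->nx_huge_page_workaround_enabled)
				disallowed_hugepage_adjust(fault, *it.sptep, it.level);

			if (it.level == fault->goal_level)
				break;
			...
		}
		...
	}
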
Fixes: b8e8c8303ff2 ("kvm: mmu: ITLB_MULTIHIT mitigation")
Cc: stable@...r.kernel.org
Reviewed-by: Ben Gardon <bgardon@...gle.com>
Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
---
arch/x86/kvm/mmu/mmu.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5628d0ba637e..4d358c273f6c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2919,6 +2919,16 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level)
 	    cur_level == fault->goal_level &&
 	    is_shadow_present_pte(spte) &&
 	    !is_large_pte(spte)) {
+		struct kvm_mmu_page *sp;
+		u64 page_mask;
+		/*
+		 * When nx hugepage flag is not set, there is no reason to
+		 * go down to another level. This helps demand paging to
+		 * generate large mappings.
+		 */
+		sp = to_shadow_page(spte & PT64_BASE_ADDR_MASK);
+		if (!sp->lpage_disallowed)
+			return;
 		/*
 		 * A small SPTE exists for this pfn, but FNAME(fetch)
 		 * and __direct_map would like to create a large PTE
@@ -2926,8 +2936,8 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level)
 		 * patching back for them into pfn the next 9 bits of
 		 * the address.
 		 */
-		u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
-				KVM_PAGES_PER_HPAGE(cur_level - 1);
+		page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
+			    KVM_PAGES_PER_HPAGE(cur_level - 1);
 		fault->pfn |= fault->gfn & page_mask;
 		fault->goal_level--;
 	}
--
2.35.1.894.gb6a874cedc-goog