Message-ID: <552C91BA.1010703@linux.intel.com>
Date: Tue, 14 Apr 2015 12:04:10 +0800
From: Xiao Guangrong <guangrong.xiao@...ux.intel.com>
To: Andres Lagar-Cavilla <andreslc@...gle.com>
CC: Wanpeng Li <wanpeng.li@...ux.intel.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Eric Northup <digitaleric@...gle.com>
Subject: [PATCH] KVM: MMU: fix comment in kvm_mmu_zap_collapsible_spte

Soft MMU uses direct shadow pages to fill a guest large mapping with
small pages when huge mappings are disallowed on the host, so zapping a
direct shadow page works for both soft MMU and hard MMU.

Fix the comment to reflect this.
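
To illustrate the point, here is a minimal user-space model, not the
kernel code: struct sp_model and zap_for_huge_remap() are invented for
this sketch. A direct sp built by the soft MMU and an indirect sp can
both sit at level 1, but only the direct one may be zapped so that a
huge mapping can be rebuilt on the next fault:

#include <stdbool.h>
#include <stdio.h>

/* Toy model of a shadow page; the real struct kvm_mmu_page carries
 * far more state, this keeps only what the argument above needs. */
struct sp_model {
	bool direct;	/* built by KVM itself, not synced with a guest table */
	int level;	/* 1 == last level, i.e. 4K sptes */
};

/*
 * A direct sp can always be zapped so the fault path can rebuild a
 * huge mapping; an indirect sp with level == 1 mirrors a guest page
 * table that really does map 4K pages, so it must be left alone.
 */
static bool zap_for_huge_remap(const struct sp_model *sp)
{
	return sp->direct;
}

int main(void)
{
	/* Soft MMU filled a guest 2M mapping with 4K pages. */
	struct sp_model direct_sp   = { .direct = true,  .level = 1 };
	/* Guest's own last-level page table, shadowed by KVM. */
	struct sp_model indirect_sp = { .direct = false, .level = 1 };

	printf("direct sp, level 1:   zap = %d\n", zap_for_huge_remap(&direct_sp));
	printf("indirect sp, level 1: zap = %d\n", zap_for_huge_remap(&indirect_sp));
	return 0;
}
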
Signed-off-by: Xiao Guangrong <guangrong.xiao@...ux.intel.com>
---
arch/x86/kvm/mmu.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 146f295..68c5487 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4481,9 +4481,11 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		pfn = spte_to_pfn(*sptep);
 
 		/*
-		 * Only EPT supported for now; otherwise, one would need to
-		 * find out efficiently whether the guest page tables are
-		 * also using huge pages.
+		 * We cannot do huge page mapping for an indirect shadow
+		 * page (sp) found on the last rmap level (level = 1), since
+		 * an indirect sp is synced with the guest page table and
+		 * indirect sp->level = 1 means the guest page table
+		 * maps 4K pages.
 		 */
 		if (sp->role.direct &&
 			!kvm_is_reserved_pfn(pfn) &&
--
2.1.0