Message-Id: <20190917085304.16987-10-weijiang.yang@intel.com>
Date: Tue, 17 Sep 2019 16:53:04 +0800
From: Yang Weijiang <weijiang.yang@...el.com>
To: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
pbonzini@...hat.com, sean.j.christopherson@...el.com
Cc: mst@...hat.com, rkrcmar@...hat.com, jmattson@...gle.com,
yu.c.zhang@...el.com, alazar@...defender.com,
Yang Weijiang <weijiang.yang@...el.com>
Subject: [PATCH v5 9/9] mmu: spp: Handle SPP protected pages when VM memory changes
Host page swapping/migration may change the translation in
an EPT leaf entry. If the target page is SPP-protected,
re-enable SPP protection in the MMU notifier. If an SPPT
shadow page is reclaimed, its level-1 pages have no rmap
to clear.
Signed-off-by: Yang Weijiang <weijiang.yang@...el.com>
---
arch/x86/kvm/mmu.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c9c430d2c7e3..c1c744ab05c9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1828,6 +1828,24 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
new_spte &= ~PT_WRITABLE_MASK;
new_spte &= ~SPTE_HOST_WRITEABLE;
+ /*
+  * If this is an EPT leaf entry and the physical page is
+  * SPP-protected, re-enable SPP protection for the page.
+  */
+ if (kvm->arch.spp_active &&
+ level == PT_PAGE_TABLE_LEVEL) {
+ struct kvm_subpage spp_info = {0};
+ int i;
+
+ spp_info.base_gfn = gfn;
+ spp_info.npages = 1;
+ i = kvm_spp_get_permission(kvm, &spp_info);
+ if (i == 1 &&
+ spp_info.access_map[0] != FULL_SPP_ACCESS)
+ new_spte |= PT_SPP_MASK;
+ }
+
new_spte = mark_spte_for_access_track(new_spte);
mmu_spte_clear_track_bits(sptep);
@@ -2677,6 +2695,10 @@ static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
pte = *spte;
if (is_shadow_present_pte(pte)) {
if (is_last_spte(pte, sp->role.level)) {
+ /* SPPT leaf entries don't have rmaps */
+ if (sp->role.level == PT_PAGE_TABLE_LEVEL &&
+ is_spp_spte(sp))
+ return true;
drop_spte(kvm, spte);
if (is_large_pte(pte))
--kvm->stat.lpages;
--
2.17.2