Message-Id: <1447412302-3504-1-git-send-email-pbonzini@redhat.com>
Date: Fri, 13 Nov 2015 11:58:22 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Yang Zhang <yang.z.zhang@...el.com>,
Takuya Yoshikawa <yoshikawa_takuya_b1@....ntt.co.jp>
Subject: [PATCH] KVM: x86: always set accessed bit in shadow PTEs
Commit 7a1638ce4220 ("nEPT: Redefine EPT-specific link_shadow_page()",
2013-08-05) says:

    Since nEPT doesn't support A/D bit, we should not set those bit
    when building the shadow page table.

but this is not necessary.  Even though nEPT doesn't support A/D bits,
and hence the vmcs12 EPT pointer will never enable them, we always use
them for shadow page tables if the host supports them (see
construct_eptp in vmx.c).  So we can set the A/D bits freely in the
shadow page table.

This patch hence basically reverts commit 7a1638ce4220.
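
For reference, the construct_eptp logic referred to above looks roughly
like the sketch below (reconstructed from memory purely for
illustration; exact macro names and surrounding code may differ between
kernel versions):

    static u64 construct_eptp(unsigned long root_hpa)
    {
            u64 eptp;

            /* Write-back memory type, 4-level page walk. */
            eptp = VMX_EPT_DEFAULT_MT |
                   VMX_EPT_DEFAULT_GAW << VMX_EPT_GAW_EPTP_SHIFT;
            /* A/D bits are enabled whenever the host CPU supports them,
             * independently of what the guest's vmcs12 EPT pointer uses. */
            if (enable_ept_ad_bits)
                    eptp |= VMX_EPT_AD_ENABLE_BIT;
            eptp |= (root_hpa & PAGE_MASK);

            return eptp;
    }
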
Cc: Yang Zhang <yang.z.zhang@...el.com>
Cc: Takuya Yoshikawa <yoshikawa_takuya_b1@....ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
---
arch/x86/kvm/mmu.c | 9 +++------
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 5 insertions(+), 8 deletions(-)
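
Purely to illustrate the effect on the non-leaf SPTE composition, here
is a small standalone userspace sketch (not kernel code; the bit
positions are the architectural x86 PTE bits, and the shadow_*
variables stand in for what KVM computes at runtime):

    /* Standalone sketch: after this patch the accessed bit is simply
     * OR'ed into the non-leaf shadow PTE unconditionally. */
    #include <stdint.h>
    #include <stdio.h>

    #define PT_PRESENT_MASK    (1ULL << 0)
    #define PT_WRITABLE_MASK   (1ULL << 1)
    #define PT_USER_MASK       (1ULL << 2)
    #define PT_ACCESSED_MASK   (1ULL << 5)

    int main(void)
    {
            uint64_t child_table_pa = 0x123456000ULL;  /* stands in for __pa(sp->spt) */
            uint64_t shadow_user_mask = PT_USER_MASK;  /* non-EPT setup */
            uint64_t shadow_x_mask = 0;                /* NX handling elided */
            uint64_t shadow_accessed_mask = PT_ACCESSED_MASK;

            uint64_t spte = child_table_pa | PT_PRESENT_MASK | PT_WRITABLE_MASK |
                            shadow_user_mask | shadow_x_mask | shadow_accessed_mask;

            printf("non-leaf spte = %#llx\n", (unsigned long long)spte);
            return 0;
    }
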
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d31b55d1fd5c..ec4e0218ab0a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2195,7 +2195,7 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
}
static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
- struct kvm_mmu_page *sp, bool accessed)
+ struct kvm_mmu_page *sp)
{
u64 spte;
@@ -2203,10 +2203,7 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
VMX_EPT_WRITABLE_MASK != PT_WRITABLE_MASK);
spte = __pa(sp->spt) | PT_PRESENT_MASK | PT_WRITABLE_MASK |
- shadow_user_mask | shadow_x_mask;
-
- if (accessed)
- spte |= shadow_accessed_mask;
+ shadow_user_mask | shadow_x_mask | shadow_accessed_mask;
mmu_spte_set(sptep, spte);
@@ -2736,7 +2733,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
sp = kvm_mmu_get_page(vcpu, pseudo_gfn, iterator.addr,
iterator.level - 1, 1, ACC_ALL);
- link_shadow_page(vcpu, iterator.sptep, sp, true);
+ link_shadow_page(vcpu, iterator.sptep, sp);
}
}
return emulate;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index ee9d21115535..91e939b486d1 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
goto out_gpte_changed;
if (sp)
- link_shadow_page(vcpu, it.sptep, sp, PT_GUEST_ACCESSED_MASK);
+ link_shadow_page(vcpu, it.sptep, sp);
}
for (;
@@ -618,7 +618,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
true, direct_access);
- link_shadow_page(vcpu, it.sptep, sp, PT_GUEST_ACCESSED_MASK);
+ link_shadow_page(vcpu, it.sptep, sp);
}
clear_sp_write_flooding_count(it.sptep);
--
1.8.3.1