Message-Id: <1371632965-20077-3-git-send-email-xiaoguangrong@linux.vnet.ibm.com>
Date: Wed, 19 Jun 2013 17:09:20 +0800
From: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
To: gleb@...hat.com
Cc: avi.kivity@...il.com, mtosatti@...hat.com, pbonzini@...hat.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Subject: [PATCH 2/7] KVM: MMU: document clear_spte_count
Document it in Documentation/virtual/kvm/mmu.txt.
Signed-off-by: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
---
Documentation/virtual/kvm/mmu.txt | 4 ++++
arch/x86/include/asm/kvm_host.h | 5 +++++
arch/x86/kvm/mmu.c | 7 ++++---
3 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
index 869abcc..ce6df51 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -210,6 +210,10 @@ Shadow pages contain the following information:
A bitmap indicating which sptes in spt point (directly or indirectly) at
pages that may be unsynchronized. Used to quickly locate all unsychronized
pages reachable from a given page.
+ clear_spte_count:
+ Only used on 32-bit hosts. It helps us detect whether an update to the
+ 64-bit spte is complete, so that we do not read a truncated value outside
+ of mmu-lock.
Reverse map
===========
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 966f265..1dac2c1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -226,6 +226,11 @@ struct kvm_mmu_page {
DECLARE_BITMAP(unsync_child_bitmap, 512);
#ifdef CONFIG_X86_32
+ /*
+ * Incremented whenever an spte in this page is cleared, so that a
+ * lockless reader does not see a truncated value outside of mmu-lock.
+ * Please see the comments in __get_spte_lockless().
+ */
int clear_spte_count;
#endif
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c87b19d..77d516c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -464,9 +464,10 @@ static u64 __update_clear_spte_slow(u64 *sptep, u64 spte)
/*
* The idea using the light way get the spte on x86_32 guest is from
* gup_get_pte(arch/x86/mm/gup.c).
- * The difference is we can not catch the spte tlb flush if we leave
- * guest mode, so we emulate it by increase clear_spte_count when spte
- * is cleared.
+ * The difference is that we can not immediately detect an spte change,
+ * since kvm may batch (collapse) tlb flushes; please see kvm_set_pte_rmapp().
+ *
+ * We emulate it by increasing clear_spte_count whenever an spte is cleared.
*/
static u64 __get_spte_lockless(u64 *sptep)
{
--
1.8.1.4