Message-Id: <20200305155709.118503-2-peterx@redhat.com>
Date: Thu, 5 Mar 2020 10:57:08 -0500
From: Peter Xu <peterx@...hat.com>
To: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: linmiaohe@...wei.com, Paolo Bonzini <pbonzini@...hat.com>,
peterx@...hat.com
Subject: [PATCH v2 1/2] KVM: Documentation: Update fast page fault for indirect sp
gfn_to_pfn_atomic() is not used anywhere. Before dropping it,
reorganize the locking document to state the fact that we do not
enable fast page fault for indirect sps. The previous wording was
confusing in that it suggested the pinning scheme had been
implemented, when in fact it has not.
Signed-off-by: Peter Xu <peterx@...hat.com>
---
Documentation/virt/kvm/locking.rst | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index c02291beac3f..d045b2a89505 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -96,8 +96,10 @@ will happen:
We dirty-log for gfn1, that means gfn2 is lost in dirty-bitmap.
For direct sp, we can easily avoid it since the spte of direct sp is fixed
-to gfn. For indirect sp, before we do cmpxchg, we call gfn_to_pfn_atomic()
-to pin gfn to pfn, because after gfn_to_pfn_atomic():
+to gfn. For indirect sp, we disable fast page fault for simplicity.
+
+A possible solution for indirect sp is to pin the pfn of the gfn
+atomically before we do the cmpxchg. After the pinning:
- We have held the refcount of pfn that means the pfn can not be freed and
be reused for another gfn.
@@ -106,9 +108,6 @@ to pin gfn to pfn, because after gfn_to_pfn_atomic():
Then, we can ensure the dirty bitmaps is correctly set for a gfn.
-Currently, to simplify the whole things, we disable fast page fault for
-indirect shadow page.
-
2) Dirty bit tracking
In the origin code, the spte can be fast updated (non-atomically) if the
--
2.24.1
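
To make the ordering concrete, here is a minimal, hypothetical C
sketch of the pin-before-cmpxchg idea the new wording describes. The
helpers pin_gfn_atomic(), unpin_pfn() and mark_gfn_dirty() are
illustrative placeholders, not KVM's real API; only cmpxchg64(),
READ_ONCE(), is_error_pfn(), spte_to_pfn() and PT_WRITABLE_MASK are
taken from the kernel. The presence/write-protection checks that the
real fast page fault path performs first are elided.

/*
 * Sketch only: pin first, re-check the mapping, then cmpxchg, then
 * log the dirty gfn.  Placeholder helpers are marked as such.
 */
static bool fast_pf_fix_indirect_spte(struct kvm_vcpu *vcpu, u64 *sptep,
				      gfn_t gfn)
{
	u64 old = READ_ONCE(*sptep);
	kvm_pfn_t pfn;

	/*
	 * Pin first: holding a refcount on the pfn guarantees it cannot
	 * be freed and reused for another gfn while we race with a zap.
	 */
	pfn = pin_gfn_atomic(vcpu, gfn);		/* placeholder */
	if (is_error_pfn(pfn))
		return false;

	/*
	 * If the gfn no longer maps the pfn recorded in the spte we
	 * sampled, the mapping changed under us; fall back to the slow
	 * path, which runs under mmu_lock.
	 */
	if (spte_to_pfn(old) != pfn) {
		unpin_pfn(pfn);				/* placeholder */
		return false;
	}

	/*
	 * The cmpxchg succeeds only if the spte is still 'old', so a
	 * successful update means gfn is still the gfn this spte maps,
	 * and the dirty bit below lands in the right bitmap slot.
	 */
	if (cmpxchg64(sptep, old, old | PT_WRITABLE_MASK) == old)
		mark_gfn_dirty(vcpu->kvm, gfn);		/* placeholder */

	unpin_pfn(pfn);
	return true;
}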