Message-Id: <20230220084500.938739-1-jun.miao@intel.com>
Date: Mon, 20 Feb 2023 16:45:00 +0800
From: Jun Miao <jun.miao@...el.com>
To: pbonzini@...hat.com, corbet@....net
Cc: seanjc@...gle.com, maciej.szmigiero@...cle.com,
kvm@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, jun.miao@...el.com
Subject: [PATCH] KVM: Update comments to refer to kvm_swap_active_memslots()

install_new_memslots() has been replaced by kvm_swap_active_memslots(),
but several comments still refer to the old name. Update them to avoid
confusion.

Fixes: a54d806688fe ("KVM: Keep memslots in tree-based structures instead of array-based ones")
Signed-off-by: Jun Miao <jun.miao@...el.com>
---
 Documentation/virt/kvm/locking.rst | 2 +-
 include/linux/kvm_host.h           | 4 ++--
 virt/kvm/kvm_main.c                | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 14c4e9fa501d..ac0e549a3ae7 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -21,7 +21,7 @@ The acquisition orders for mutexes are as follows:
 - kvm->mn_active_invalidate_count ensures that pairs of
   invalidate_range_start() and invalidate_range_end() callbacks
   use the same memslots array. kvm->slots_lock and kvm->slots_arch_lock
-  are taken on the waiting side in install_new_memslots, so MMU notifiers
+  are taken on the waiting side in kvm_swap_active_memslots, so MMU notifiers
   must not take either kvm->slots_lock or kvm->slots_arch_lock.
 
 For SRCU:
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8ada23756b0e..7f8242dd2745 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -58,7 +58,7 @@
 
 /*
  * Bit 63 of the memslot generation number is an "update in-progress flag",
- * e.g. is temporarily set for the duration of install_new_memslots().
+ * e.g. is temporarily set for the duration of kvm_swap_active_memslots().
  * This flag effectively creates a unique generation number that is used to
  * mark cached memslot data, e.g. MMIO accesses, as potentially being stale,
  * i.e. may (or may not) have come from the previous memslots generation.
@@ -713,7 +713,7 @@ struct kvm {
 	 * use by the VM. To be used under the slots_lock (above) or in a
 	 * kvm->srcu critical section where acquiring the slots_lock would
 	 * lead to deadlock with the synchronize_srcu in
-	 * install_new_memslots.
+	 * kvm_swap_active_memslots.
 	 */
 	struct mutex slots_arch_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331..7a4ff9fc5978 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1298,7 +1298,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
 	 * At this point, pending calls to invalidate_range_start()
 	 * have completed but no more MMU notifiers will run, so
 	 * mn_active_invalidate_count may remain unbalanced.
-	 * No threads can be waiting in install_new_memslots as the
+	 * No threads can be waiting in kvm_swap_active_memslots as the
 	 * last reference on KVM has been dropped, but freeing
 	 * memslots would deadlock without this manual intervention.
 	 */
@@ -1748,7 +1748,7 @@ static void kvm_invalidate_memslot(struct kvm *kvm,
 	/*
 	 * Copy the arch-specific field of the newly-installed slot back to the
 	 * old slot as the arch data could have changed between releasing
-	 * slots_arch_lock in install_new_memslots() and re-acquiring the lock
+	 * slots_arch_lock in kvm_swap_active_memslots() and re-acquiring the lock
 	 * above. Writers are required to retrieve memslots *after* acquiring
 	 * slots_arch_lock, thus the active slot's data is guaranteed to be fresh.
 	 */
--
2.32.0