Message-ID: <YO3OomTEhGFo2yee@google.com>
Date: Tue, 13 Jul 2021 17:34:26 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
bgardon@...gle.com
Subject: Re: [PATCH 1/2] KVM: Block memslot updates across range_start() and
range_end()
On Thu, Jun 10, 2021, Paolo Bonzini wrote:
> static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index fa7e7ebefc79..0dc0726c8d18 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -605,10 +605,13 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
>
> /*
> * .change_pte() must be surrounded by .invalidate_range_{start,end}(),
> - * and so always runs with an elevated notifier count. This obviates
> - * the need to bump the sequence count.
> + * If mmu_notifier_count is zero, then start() didn't find a relevant
> + * memslot and wasn't forced down the slow path; rechecking here is
> + * unnecessary.
> */
> - WARN_ON_ONCE(!kvm->mmu_notifier_count);
> + WARN_ON_ONCE(!READ_ONCE(kvm->mn_active_invalidate_count));
The sanity check on mn_active_invalidate_count can be added in this patch,
but the optimization to return on !mmu_notifier_count should go in the next
patch: at this point in the series, __kvm_handle_hva_range() always takes
mmu_lock, so mmu_notifier_count is guaranteed to be non-zero here.
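
E.g. for this patch, something like the below (untested sketch, purely to
illustrate the split), with the conversion to an early return deferred to
the patch that lets start() skip the slow path:

	/*
	 * .change_pte() must be surrounded by .invalidate_range_{start,end}(),
	 * and so always runs with an elevated notifier count.
	 */
	WARN_ON_ONCE(!kvm->mmu_notifier_count);
	WARN_ON_ONCE(!READ_ONCE(kvm->mn_active_invalidate_count));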
> + if (!kvm->mmu_notifier_count)
> + return;
>
> kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
> }
...
> @@ -1281,7 +1322,21 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
> WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
> slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS;
>
> + /*
> + * Do not store the new memslots while there are invalidations in
> + * progress (preparatory change for the next commit).
> + */
> + spin_lock(&kvm->mn_invalidate_lock);
> + prepare_to_rcuwait(&kvm->mn_memslots_update_rcuwait);
> + while (kvm->mn_active_invalidate_count) {
Does this need a READ_ONCE()? Or are the spin locks guaranteed to prevent the
compiler from caching mn_active_invalidate_count?
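
If it is needed, annotating just the loop condition would do the trick,
e.g. (sketch):

	while (READ_ONCE(kvm->mn_active_invalidate_count)) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		spin_unlock(&kvm->mn_invalidate_lock);
		schedule();
		spin_lock(&kvm->mn_invalidate_lock);
	}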
> + set_current_state(TASK_UNINTERRUPTIBLE);
> + spin_unlock(&kvm->mn_invalidate_lock);
> + schedule();
> + spin_lock(&kvm->mn_invalidate_lock);
> + }
> + finish_rcuwait(&kvm->mn_memslots_update_rcuwait);
> rcu_assign_pointer(kvm->memslots[as_id], slots);
> + spin_unlock(&kvm->mn_invalidate_lock);
>
> /*
> * Acquired in kvm_set_memslot. Must be released before synchronize
> --
> 2.27.0
>
>