Message-ID: <6e7dc7d0-f5dc-85d9-1c50-d23b761b5ff3@redhat.com>
Date: Wed, 31 Mar 2021 10:35:33 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Marc Zyngier <maz@...nel.org>,
Huacai Chen <chenhuacai@...nel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@...il.com>,
Paul Mackerras <paulus@...abs.org>
Cc: James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-mips@...r.kernel.org, kvm@...r.kernel.org,
kvm-ppc@...r.kernel.org, linux-kernel@...r.kernel.org,
Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH 16/18] KVM: Don't take mmu_lock for range invalidation
 unless necessary
On 26/03/21 03:19, Sean Christopherson wrote:
> + /*
> + * Reset the lock used to prevent memslot updates between MMU notifier
> + * range_start and range_end. At this point no more MMU notifiers will
> + * run, but the lock could still be held if KVM's notifier was removed
> + * between range_start and range_end. No threads can be waiting on the
> + * lock as the last reference on KVM has been dropped. If the lock is
> + * still held, freeing memslots will deadlock.
> + */
> + init_rwsem(&kvm->mmu_notifier_slots_lock);
I was going to say that this is nasty, but then I noticed that
mmu_notifier_unregister uses SRCU to ensure completion of concurrent
calls to the MMU notifier.  So I guess it's fine, but it's better to
point it out:
	/*
	 * At this point no more MMU notifiers will run and pending
	 * calls to range_start have completed, but the lock would
	 * still be held and never released if the MMU notifier was
	 * removed between range_start and range_end. Since the last
	 * reference to the struct kvm has been dropped, no threads can
	 * be waiting on the lock, but we might still end up taking it
	 * when freeing memslots in kvm_arch_destroy_vm. Reset the lock
	 * to avoid deadlocks.
	 */
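To make the ordering explicit, this is roughly how the relevant tail of
kvm_destroy_vm would then read (abridged from memory, so the
surrounding code may differ a bit; only the ordering matters):

	#if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
		/*
		 * Uses SRCU internally, so this also waits for any
		 * notifier call that is still running, e.g. a pending
		 * range_start.
		 */
		mmu_notifier_unregister(&kvm->mmu_notifier, kvm->mm);
	#endif
		/* ... the comment above ... */
		init_rwsem(&kvm->mmu_notifier_slots_lock);
		/* May take the rwsem while freeing memslots. */
		kvm_arch_destroy_vm(kvm);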
That said, the easiest way to avoid this would be to always update
mmu_notifier_count.  I don't mind the rwsem, but I suggest at least
splitting the patch in two: the first keeping the mmu_notifier_count
update unconditional, and the second introducing the rwsem and the
on_lock function kvm_inc_notifier_count.  Please document the new lock
in Documentation/virt/kvm/locking.rst too.
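In the second patch, the on_lock callback would then be little more
than the bookkeeping that kvm_mmu_notifier_invalidate_range_start
already does open-coded today.  Just a sketch of what I have in mind:

	static void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
					   unsigned long end)
	{
		/*
		 * Called by the common walker with mmu_lock held; the
		 * count is the same mmu_notifier_count that
		 * invalidate_range_start maintains today.
		 */
		kvm->mmu_notifier_count++;
	}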
Also, related to the first part of the series, perhaps you could
structure the series in a slightly different way:
1) introduce the HVA walking API in common code, complete with on_lock
and patch 15, so that you can use on_lock to increase mmu_notifier_seq
2) then migrate all architectures including x86 to the new API
IOW, first half of patch 10 and all of patch 15; then the second half of
patch 10; then patches 11-14.
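In case it helps, this is the shape of the common walker I have in
mind; all the names here are only indicative, the actual patch can of
course spell them differently:

	struct kvm_hva_range {
		unsigned long start;
		unsigned long end;
		bool (*handler)(struct kvm *kvm, struct kvm_gfn_range *range);
		void (*on_lock)(struct kvm *kvm, unsigned long start,
				unsigned long end);
	};

	static void __kvm_handle_hva_range(struct kvm *kvm,
					   const struct kvm_hva_range *range)
	{
		KVM_MMU_LOCK(kvm);
		if (range->on_lock)
			range->on_lock(kvm, range->start, range->end);
		/*
		 * Walk the memslots that overlap [start, end) and invoke
		 * range->handler on each overlapping gfn range (omitted).
		 */
		KVM_MMU_UNLOCK(kvm);
	}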
> +#if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
> + down_write(&kvm->mmu_notifier_slots_lock);
> +#endif
> rcu_assign_pointer(kvm->memslots[as_id], slots);
> +#if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
> + up_write(&kvm->mmu_notifier_slots_lock);
> +#endif
Please do this unconditionally; the cost is minimal if the rwsem is not
contended (as is the case if the architecture doesn't use MMU notifiers
at all).
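That is, assuming the rwsem and its initialization become unconditional
as well:

		down_write(&kvm->mmu_notifier_slots_lock);
		rcu_assign_pointer(kvm->memslots[as_id], slots);
		up_write(&kvm->mmu_notifier_slots_lock);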
Paolo