Message-ID: <YGcxRmzbEr3kPsWE@google.com>
Date: Fri, 2 Apr 2021 14:59:18 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Marc Zyngier <maz@...nel.org>, Huacai Chen <chenhuacai@...nel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@...il.com>,
Paul Mackerras <paulus@...abs.org>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-mips@...r.kernel.org, kvm@...r.kernel.org,
kvm-ppc@...r.kernel.org, linux-kernel@...r.kernel.org,
Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH v2 09/10] KVM: Don't take mmu_lock for range invalidation
unless necessary
On Fri, Apr 02, 2021, Paolo Bonzini wrote:
> On 02/04/21 02:56, Sean Christopherson wrote:
> > Avoid taking mmu_lock for unrelated .invalidate_range_{start,end}()
> > notifications. Because mmu_notifier_count must be modified while holding
> > mmu_lock for write, and must always be paired across start->end to stay
> > balanced, lock elision must happen in both or none. To meet that
> > requirement, add a rwsem to prevent memslot updates across range_start()
> > and range_end().
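For anyone following along, the shape of the idea is roughly the below.  This is
a simplified sketch rather than the code from the patch:
hva_range_overlaps_memslot() is a made-up placeholder for the real memslot
overlap check, and KVM_MMU_LOCK()/KVM_MMU_UNLOCK() stand in for taking mmu_lock
for write.

static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
                                        const struct mmu_notifier_range *range)
{
        struct kvm *kvm = mmu_notifier_to_kvm(mn);

        /*
         * Held for read across start()...end().  Memslot updates take the
         * rwsem for write, so the memslots array can't change in between
         * and start()/end() reach the same elide-or-not decision.
         */
        down_read(&kvm->mmu_notifier_slots_lock);

        if (!hva_range_overlaps_memslot(kvm, range))
                return 0;               /* elide mmu_lock; end() elides too */

        KVM_MMU_LOCK(kvm);
        kvm->mmu_notifier_count++;
        /* ... zap/unmap the affected hva range ... */
        KVM_MMU_UNLOCK(kvm);
        return 0;
}

static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
                                        const struct mmu_notifier_range *range)
{
        struct kvm *kvm = mmu_notifier_to_kvm(mn);

        if (hva_range_overlaps_memslot(kvm, range)) {
                KVM_MMU_LOCK(kvm);
                kvm->mmu_notifier_count--;
                KVM_MMU_UNLOCK(kvm);
        }

        up_read(&kvm->mmu_notifier_slots_lock);
}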
> >
> > Use a rwsem instead of a rwlock since most notifiers _allow_ blocking,
> > and the lock will be held across the entire start() ... end() sequence.
> > If anything in the sequence sleeps, including the caller or a different
> > notifier, holding the spinlock would be disastrous.
> >
> > For notifiers that _disallow_ blocking, e.g. OOM reaping, simply go down
> > the slow path of unconditionally acquiring mmu_lock. The sane
> > alternative would be to try to acquire the lock and force the notifier
> > to retry on failure. But since OOM is currently the _only_ scenario
> > where blocking is disallowed, attempting to optimize a guest that has been
> > marked for death is pointless.
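Concretely, the choice between eliding and the slow path keys off whether the
callback is allowed to sleep.  mmu_notifier_range_blockable() is the real
helper for that; the rest of this fragment is, again, purely illustrative:

        if (mmu_notifier_range_blockable(range)) {
                /* May sleep: wait on the rwsem and try to skip mmu_lock. */
                down_read(&kvm->mmu_notifier_slots_lock);
                /* ... elision path as sketched above ... */
        } else {
                /*
                 * OOM reaper and friends must not sleep: skip the rwsem and
                 * unconditionally take mmu_lock.  start() and end() both take
                 * the slow path, so mmu_notifier_count stays balanced without
                 * needing a stable memslots array.
                 */
                KVM_MMU_LOCK(kvm);
                kvm->mmu_notifier_count++;
                /* ... zap the range ... */
                KVM_MMU_UNLOCK(kvm);
        }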
> >
> > Unconditionally define and use mmu_notifier_slots_lock in the memslots
> > code, purely to avoid more #ifdefs. The overhead of acquiring the lock
> > is negligible when the lock is uncontested, which will always be the case
> > when the MMU notifiers are not used.
> >
> > Note, technically flag-only memslot updates could be allowed in parallel,
> > but stalling a memslot update for a relatively short amount of time is
> > not a scalability issue, and this is all more than complex enough.
>
> Proposal for the locking documentation:
Argh, sorry! Looks great, I owe you.
> diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
> index b21a34c34a21..3e4ad7de36cb 100644
> --- a/Documentation/virt/kvm/locking.rst
> +++ b/Documentation/virt/kvm/locking.rst
> @@ -16,6 +16,13 @@ The acquisition orders for mutexes are as follows:
> - kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
> them together is quite rare.
> +- The kvm->mmu_notifier_slots_lock rwsem ensures that pairs of
> + invalidate_range_start() and invalidate_range_end() callbacks
> + use the same memslots array. kvm->slots_lock is taken outside the
> + write-side critical section of kvm->mmu_notifier_slots_lock, so
> + MMU notifiers must not take kvm->slots_lock. No other write-side
> + critical sections should be added.
> +
> On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.
> Everything else is a leaf: no other lock is taken inside the critical
>
> Paolo
>
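To spell out the ordering the last paragraph documents: the write side nests
inside kvm->slots_lock on the memslot update path, roughly as below (a sketch
of what install_new_memslots() ends up doing with this patch, not the literal
diff):

        mutex_lock(&kvm->slots_lock);
        /* ... validate and prepare the new memslots ... */

        /*
         * Write-side critical section: waits for in-flight start()/end()
         * pairs to drop the rwsem and blocks new ones, so a notifier never
         * sees the memslots array change between start() and end().
         */
        down_write(&kvm->mmu_notifier_slots_lock);
        rcu_assign_pointer(kvm->memslots[as_id], slots);
        up_write(&kvm->mmu_notifier_slots_lock);

        /* ... arch commit, free the old memslots ... */
        mutex_unlock(&kvm->slots_lock);

Because slots_lock is always taken first, a notifier that grabbed
kvm->slots_lock while holding the rwsem for read could deadlock against an
updater stuck in down_write(), hence the "MMU notifiers must not take
kvm->slots_lock" rule.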