Message-ID: <20250501111552.GO4198@noisy.programming.kicks-ass.net>
Date: Thu, 1 May 2025 13:15:52 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Marc Zyngier <maz@...nel.org>
Cc: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org,
linux-riscv@...ts.infradead.org,
Kunkun Jiang <jiangkunkun@...wei.com>,
Waiman Long <longman@...hat.com>, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
Catalin Marinas <catalin.marinas@....com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Boqun Feng <boqun.feng@...il.com>, Borislav Petkov <bp@...en8.de>,
Albert Ou <aou@...s.berkeley.edu>, Anup Patel <anup@...infault.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Palmer Dabbelt <palmer@...belt.com>,
Alexandre Ghiti <alex@...ti.fr>,
Alexander Potapenko <glider@...gle.com>,
Oliver Upton <oliver.upton@...ux.dev>,
Andre Przywara <andre.przywara@....com>, x86@...nel.org,
Joey Gouly <joey.gouly@....com>,
Thomas Gleixner <tglx@...utronix.de>, kvm-riscv@...ts.infradead.org,
Atish Patra <atishp@...shpatra.org>, Ingo Molnar <mingo@...hat.com>,
Jing Zhang <jingzhangos@...gle.com>,
"H. Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, kvmarm@...ts.linux.dev,
Will Deacon <will@...nel.org>,
Keisuke Nishimura <keisuke.nishimura@...ia.fr>,
Sebastian Ott <sebott@...hat.com>, Shusen Li <lishusen2@...wei.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Randy Dunlap <rdunlap@...radead.org>,
Sean Christopherson <seanjc@...gle.com>,
Zenghui Yu <yuzenghui@...wei.com>
Subject: Re: [PATCH v4 2/5] arm64: KVM: use mutex_trylock_nest_lock when
locking all vCPUs
On Thu, May 01, 2025 at 09:24:11AM +0100, Marc Zyngier wrote:
> nit: in keeping with the existing arm64 patches, please write the
> subject as "KVM: arm64: Use ..."
>
> On Wed, 30 Apr 2025 21:30:10 +0100,
> Maxim Levitsky <mlevitsk@...hat.com> wrote:
>
> [...]
>
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 68fec8c95fee..d31f42a71bdc 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1914,49 +1914,6 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
> > }
> > }
> >
> > -/* unlocks vcpus from @vcpu_lock_idx and smaller */
> > -static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
> > -{
> > - struct kvm_vcpu *tmp_vcpu;
> > -
> > - for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
> > - tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
> > - mutex_unlock(&tmp_vcpu->mutex);
> > - }
> > -}
> > -
> > -void unlock_all_vcpus(struct kvm *kvm)
> > -{
> > - lockdep_assert_held(&kvm->lock);
>
> Note this assertion...
>
> > -
> > - unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
> > -}
> > -
> > -/* Returns true if all vcpus were locked, false otherwise */
> > -bool lock_all_vcpus(struct kvm *kvm)
> > -{
> > - struct kvm_vcpu *tmp_vcpu;
> > - unsigned long c;
> > -
> > - lockdep_assert_held(&kvm->lock);
>
> and this one...
>
> > -
> > - /*
> > - * Any time a vcpu is in an ioctl (including running), the
> > - * core KVM code tries to grab the vcpu->mutex.
> > - *
> > - * By grabbing the vcpu->mutex of all VCPUs we ensure that no
> > - * other VCPUs can fiddle with the state while we access it.
> > - */
> > - kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
> > - if (!mutex_trylock(&tmp_vcpu->mutex)) {
> > - unlock_vcpus(kvm, c - 1);
> > - return false;
> > - }
> > - }
> > -
> > - return true;
> > -}
> > -
> > static unsigned long nvhe_percpu_size(void)
> > {
> > return (unsigned long)CHOOSE_NVHE_SYM(__per_cpu_end) -
>
> [...]
>
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 69782df3617f..834f08dfa24c 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -1368,6 +1368,40 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> > return 0;
> > }
> >
> > +/*
> > + * Try to lock all of the VM's vCPUs.
> > + * Assumes that the kvm->lock is held.
>
> Assuming is not enough. These assertions have caught a number of bugs,
> and I'm not prepared to drop them.
>
> > + */
> > +int kvm_trylock_all_vcpus(struct kvm *kvm)
> > +{
> > + struct kvm_vcpu *vcpu;
> > + unsigned long i, j;
> > +
> > + kvm_for_each_vcpu(i, vcpu, kvm)
> > + if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
This one includes an assertion that kvm->lock is actually held.

That said, I'm not at all sure what the purpose of all this trylock
stuff is here. Can someone explain? Last time I asked, someone said
something about multiple VMs, but I don't know enough about KVM to know
what that means. Are those vcpu->mutex locks another class for other
VMs? Or what gives?