Message-ID: <aBUyC5kgTipXud-7@google.com>
Date: Fri, 2 May 2025 13:58:51 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Marc Zyngier <maz@...nel.org>
Cc: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org,
linux-riscv@...ts.infradead.org, Kunkun Jiang <jiangkunkun@...wei.com>,
Waiman Long <longman@...hat.com>, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
Catalin Marinas <catalin.marinas@....com>, Bjorn Helgaas <bhelgaas@...gle.com>,
Boqun Feng <boqun.feng@...il.com>, Borislav Petkov <bp@...en8.de>, Albert Ou <aou@...s.berkeley.edu>,
Anup Patel <anup@...infault.org>, Paul Walmsley <paul.walmsley@...ive.com>,
Suzuki K Poulose <suzuki.poulose@....com>, Palmer Dabbelt <palmer@...belt.com>,
Alexandre Ghiti <alex@...ti.fr>, Alexander Potapenko <glider@...gle.com>, Oliver Upton <oliver.upton@...ux.dev>,
Andre Przywara <andre.przywara@....com>, x86@...nel.org, Joey Gouly <joey.gouly@....com>,
Thomas Gleixner <tglx@...utronix.de>, kvm-riscv@...ts.infradead.org,
Atish Patra <atishp@...shpatra.org>, Ingo Molnar <mingo@...hat.com>,
Jing Zhang <jingzhangos@...gle.com>, "H. Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, kvmarm@...ts.linux.dev,
Will Deacon <will@...nel.org>, Keisuke Nishimura <keisuke.nishimura@...ia.fr>,
Sebastian Ott <sebott@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Shusen Li <lishusen2@...wei.com>, Paolo Bonzini <pbonzini@...hat.com>,
Randy Dunlap <rdunlap@...radead.org>, Zenghui Yu <yuzenghui@...wei.com>
Subject: Re: [PATCH v4 2/5] arm64: KVM: use mutex_trylock_nest_lock when
locking all vCPUs
On Thu, May 01, 2025, Marc Zyngier wrote:
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 69782df3617f..834f08dfa24c 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -1368,6 +1368,40 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> > return 0;
> > }
> >
> > +/*
> > + * Try to lock all of the VM's vCPUs.
> > + * Assumes that the kvm->lock is held.
>
> Assuming is not enough. These assertions have caught a number of bugs,
> and I'm not prepared to drop them.
>
> > + */
> > +int kvm_trylock_all_vcpus(struct kvm *kvm)
> > +{
> > + struct kvm_vcpu *vcpu;
> > + unsigned long i, j;
> > +
> > + kvm_for_each_vcpu(i, vcpu, kvm)
> > + if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
> > + goto out_unlock;
> > + return 0;
> > +
> > +out_unlock:
> > + kvm_for_each_vcpu(j, vcpu, kvm) {
> > + if (i == j)
> > + break;
> > + mutex_unlock(&vcpu->mutex);
> > + }
> > + return -EINTR;
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_trylock_all_vcpus);
> > +
> > +void kvm_unlock_all_vcpus(struct kvm *kvm)
> > +{
> > + struct kvm_vcpu *vcpu;
> > + unsigned long i;
> > +
> > + kvm_for_each_vcpu(i, vcpu, kvm)
> > + mutex_unlock(&vcpu->mutex);
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_unlock_all_vcpus);
>
> I don't mind you not including the assertions in these helpers,
I do :-) I see no reason not to add assertions here; if locking all vCPUs is
a hot path, we've probably got bigger problems.