[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <864iy4ivro.wl-maz@kernel.org>
Date: Thu, 01 May 2025 09:24:11 +0100
From: Marc Zyngier <maz@...nel.org>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: kvm@...r.kernel.org,
	linux-riscv@...ts.infradead.org,
	Kunkun Jiang <jiangkunkun@...wei.com>,
	Waiman Long <longman@...hat.com>,
	linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org,
	Catalin Marinas <catalin.marinas@....com>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	Boqun Feng <boqun.feng@...il.com>,
	Borislav Petkov <bp@...en8.de>,
	Albert Ou <aou@...s.berkeley.edu>,
	Anup Patel <anup@...infault.org>,
	Paul Walmsley <paul.walmsley@...ive.com>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Palmer Dabbelt <palmer@...belt.com>,
	Alexandre Ghiti <alex@...ti.fr>,
	Alexander Potapenko <glider@...gle.com>,
	Oliver Upton <oliver.upton@...ux.dev>,
	Andre Przywara <andre.przywara@....com>,
	x86@...nel.org,
	Joey Gouly <joey.gouly@....com>,
	Thomas Gleixner <tglx@...utronix.de>,
	kvm-riscv@...ts.infradead.org,
	Atish Patra <atishp@...shpatra.org>,
	Ingo Molnar <mingo@...hat.com>,
	Jing Zhang <jingzhangos@...gle.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	kvmarm@...ts.linux.dev,
	Will Deacon <will@...nel.org>,
	Keisuke Nishimura <keisuke.nishimura@...ia.fr>,
	Sebastian Ott <sebott@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Shusen Li <lishusen2@...wei.com>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Randy Dunlap <rdunlap@...radead.org>,
	Sean Christopherson <seanjc@...gle.com>,
	Zenghui Yu <yuzenghui@...wei.com>
Subject: Re: [PATCH v4 2/5] arm64: KVM: use mutex_trylock_nest_lock when locking all vCPUs

nit: in keeping with the existing arm64 patches, please write the
subject as "KVM: arm64: Use ..."

On Wed, 30 Apr 2025 21:30:10 +0100,
Maxim Levitsky <mlevitsk@...hat.com> wrote:

[...]

> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 68fec8c95fee..d31f42a71bdc 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1914,49 +1914,6 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
>  	}
>  }
>  
> -/* unlocks vcpus from @vcpu_lock_idx and smaller */
> -static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
> -{
> -	struct kvm_vcpu *tmp_vcpu;
> -
> -	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
> -		tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
> -		mutex_unlock(&tmp_vcpu->mutex);
> -	}
> -}
> -
> -void unlock_all_vcpus(struct kvm *kvm)
> -{
> -	lockdep_assert_held(&kvm->lock);

Note this assertion...

> -
> -	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
> -}
> -
> -/* Returns true if all vcpus were locked, false otherwise */
> -bool lock_all_vcpus(struct kvm *kvm)
> -{
> -	struct kvm_vcpu *tmp_vcpu;
> -	unsigned long c;
> -
> -	lockdep_assert_held(&kvm->lock);

and this one...

> -
> -	/*
> -	 * Any time a vcpu is in an ioctl (including running), the
> -	 * core KVM code tries to grab the vcpu->mutex.
> -	 *
> -	 * By grabbing the vcpu->mutex of all VCPUs we ensure that no
> -	 * other VCPUs can fiddle with the state while we access it.
> -	 */
> -	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
> -		if (!mutex_trylock(&tmp_vcpu->mutex)) {
> -			unlock_vcpus(kvm, c - 1);
> -			return false;
> -		}
> -	}
> -
> -	return true;
> -}
> -
>  static unsigned long nvhe_percpu_size(void)
>  {
>  	return (unsigned long)CHOOSE_NVHE_SYM(__per_cpu_end) -

[...]

> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 69782df3617f..834f08dfa24c 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1368,6 +1368,40 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
>  	return 0;
>  }
>  
> +/*
> + * Try to lock all of the VM's vCPUs.
> + * Assumes that the kvm->lock is held.

Assuming is not enough. These assertions have caught a number of bugs,
and I'm not prepared to drop them.

> + */
> +int kvm_trylock_all_vcpus(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu;
> +	unsigned long i, j;
> +
> +	kvm_for_each_vcpu(i, vcpu, kvm)
> +		if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
> +			goto out_unlock;
> +	return 0;
> +
> +out_unlock:
> +	kvm_for_each_vcpu(j, vcpu, kvm) {
> +		if (i == j)
> +			break;
> +		mutex_unlock(&vcpu->mutex);
> +	}
> +	return -EINTR;
> +}
> +EXPORT_SYMBOL_GPL(kvm_trylock_all_vcpus);
> +
> +void kvm_unlock_all_vcpus(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu;
> +	unsigned long i;
> +
> +	kvm_for_each_vcpu(i, vcpu, kvm)
> +		mutex_unlock(&vcpu->mutex);
> +}
> +EXPORT_SYMBOL_GPL(kvm_unlock_all_vcpus);

I don't mind you not including the assertions in these helpers, but
then the existing primitives have to stay and call into the new stuff.
Which, from a simple patch-volume perspective, would be far preferable
and would help with managing backports.
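
Concretely, something like this (untested sketch, just to illustrate
the shape I have in mind): keep the existing arm64 entry points as
thin wrappers that preserve the assertions and defer to the new
generic helpers.

```c
/* arch/arm64/kvm/arm.c -- sketch only, not a tested patch.
 * The existing primitives keep their lockdep assertions and
 * simply call into the new generic helpers from kvm_main.c. */
bool lock_all_vcpus(struct kvm *kvm)
{
	lockdep_assert_held(&kvm->lock);

	return kvm_trylock_all_vcpus(kvm) == 0;
}

void unlock_all_vcpus(struct kvm *kvm)
{
	lockdep_assert_held(&kvm->lock);

	kvm_unlock_all_vcpus(kvm);
}
```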

I'd also expect the introduction of these new helpers to be done in
its own patch, so that we don't get cross-architecture dependencies if
something needs to be backported for one reason or another.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
