Message-ID: <87seok25qx.wl-maz@kernel.org>
Date: Tue, 11 Feb 2025 09:24:54 +0000
From: Marc Zyngier <maz@...nel.org>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: kvm@...r.kernel.org,
	Paolo Bonzini <pbonzini@...hat.com>,
	Jing Zhang <jingzhangos@...gle.com>,
	Oliver Upton <oliver.upton@...ux.dev>,
	linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org,
	Randy Dunlap <rdunlap@...radead.org>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Palmer Dabbelt <palmer@...belt.com>,
	Zenghui Yu <yuzenghui@...wei.com>,
	kvm-riscv@...ts.infradead.org,
	Ingo Molnar <mingo@...hat.com>,
	linux-riscv@...ts.infradead.org,
	Joey Gouly <joey.gouly@....com>,
	Paul Walmsley <paul.walmsley@...ive.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	Albert Ou <aou@...s.berkeley.edu>,
	kvmarm@...ts.linux.dev,
	Alexander Potapenko <glider@...gle.com>,
	x86@...nel.org,
	Sean Christopherson <seanjc@...gle.com>,
	Anup Patel <anup@...infault.org>,
	Kunkun Jiang <jiangkunkun@...wei.com>,
	Atish Patra <atishp@...shpatra.org>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>,
	Borislav Petkov <bp@...en8.de>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH 2/3] KVM: arm64: switch to using kvm_lock/unlock_all_vcpus

On Tue, 11 Feb 2025 00:09:16 +0000,
Maxim Levitsky <mlevitsk@...hat.com> wrote:
> 
> Switch to kvm_lock/unlock_all_vcpus instead of arm's own
> version.
> 
> This fixes lockdep warning about reaching maximum lock depth:
> 
> [  328.171264] BUG: MAX_LOCK_DEPTH too low!
> [  328.175227] turning off the locking correctness validator.
> [  328.180726] Please attach the output of /proc/lock_stat to the bug report
> [  328.187531] depth: 48  max: 48!
> [  328.190678] 48 locks held by qemu-kvm/11664:
> [  328.194957]  #0: ffff800086de5ba0 (&kvm->lock){+.+.}-{3:3}, at: kvm_ioctl_create_device+0x174/0x5b0
> [  328.204048]  #1: ffff0800e78800b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.212521]  #2: ffff07ffeee51e98 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.220991]  #3: ffff0800dc7d80b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.229463]  #4: ffff07ffe0c980b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.237934]  #5: ffff0800a3883c78 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.246405]  #6: ffff07fffbe480b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> 
> No functional change intended.

Actually, there is plenty of it. A broad assertion like this is usually
an indication of the contrary.

> 
> Suggested-by: Paolo Bonzini <pbonzini@...hat.com>
> Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> ---
>  arch/arm64/include/asm/kvm_host.h     |  3 ---
>  arch/arm64/kvm/arch_timer.c           |  8 +++----
>  arch/arm64/kvm/arm.c                  | 32 ---------------------------
>  arch/arm64/kvm/vgic/vgic-init.c       | 11 +++++----
>  arch/arm64/kvm/vgic/vgic-its.c        | 18 ++++++++-------
>  arch/arm64/kvm/vgic/vgic-kvm-device.c | 21 ++++++++++--------
>  6 files changed, 33 insertions(+), 60 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7cfa024de4e3..bba97ea700ca 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1234,9 +1234,6 @@ int __init populate_sysreg_config(const struct sys_reg_desc *sr,
>  				  unsigned int idx);
>  int __init populate_nv_trap_config(void);
>  
> -bool lock_all_vcpus(struct kvm *kvm);
> -void unlock_all_vcpus(struct kvm *kvm);
> -
>  void kvm_calculate_traps(struct kvm_vcpu *vcpu);
>  
>  /* MMIO helpers */
> diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
> index 231c0cd9c7b4..3af1da807f9c 100644
> --- a/arch/arm64/kvm/arch_timer.c
> +++ b/arch/arm64/kvm/arch_timer.c
> @@ -1769,7 +1769,9 @@ int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
>  
>  	mutex_lock(&kvm->lock);
>  
> -	if (lock_all_vcpus(kvm)) {
> +	ret = kvm_lock_all_vcpus(kvm);
> +
> +	if (!ret) {
>  		set_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &kvm->arch.flags);
>  
>  		/*
> @@ -1781,9 +1783,7 @@ int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
>  		kvm->arch.timer_data.voffset = offset->counter_offset;
>  		kvm->arch.timer_data.poffset = offset->counter_offset;
>  
> -		unlock_all_vcpus(kvm);
> -	} else {
> -		ret = -EBUSY;
> +		kvm_unlock_all_vcpus(kvm);
>  	}

This is a userspace ABI change. This ioctl is documented as being able
to return -EINVAL or -EBUSY, and nothing else other than 0. Yet the
new helper returns -EINTR, which you blindly forward to userspace.
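
To make that concrete: even if the helper kept its current return
values, the caller would at the very least have to translate the error
instead of passing it through, along these lines (illustrative only,
and not an endorsement of this approach):

	ret = kvm_lock_all_vcpus(kvm);
	if (ret) {
		/*
		 * The documented results for this ioctl are 0, -EINVAL and
		 * -EBUSY; don't let the helper's -EINTR leak to userspace.
		 */
		ret = -EBUSY;
	} else {
		set_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &kvm->arch.flags);
		/* ... update voffset/poffset as in the hunk above ... */
		kvm_unlock_all_vcpus(kvm);
	}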

>  
>  	mutex_unlock(&kvm->lock);
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 071a7d75be68..f58849c5b4f0 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1895,38 +1895,6 @@ static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
>  	}
>  }
>  
> -void unlock_all_vcpus(struct kvm *kvm)
> -{
> -	lockdep_assert_held(&kvm->lock);
> -
> -	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
> -}
> -
> -/* Returns true if all vcpus were locked, false otherwise */
> -bool lock_all_vcpus(struct kvm *kvm)
> -{
> -	struct kvm_vcpu *tmp_vcpu;
> -	unsigned long c;
> -
> -	lockdep_assert_held(&kvm->lock);
> -
> -	/*
> -	 * Any time a vcpu is in an ioctl (including running), the
> -	 * core KVM code tries to grab the vcpu->mutex.
> -	 *
> -	 * By grabbing the vcpu->mutex of all VCPUs we ensure that no
> -	 * other VCPUs can fiddle with the state while we access it.
> -	 */
> -	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
> -		if (!mutex_trylock(&tmp_vcpu->mutex)) {
> -			unlock_vcpus(kvm, c - 1);
> -			return false;
> -		}
> -	}
> -
> -	return true;
> -}

The semantics are different.

Other than the return values mentioned above, the new version fails on
signal delivery, which isn't expected.  The guarantee given to
userspace is that unless a vcpu thread is currently in KVM, the
locking will succeed. Not "will succeed unless something that is
outside of your control happens".

The arm64 version is also built around mutex_trylock() because we
don't want to wait indefinitely for a vcpu's mutex to be released. We
want it now, or never. That's consistent with the above requirement on
userspace.
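
For a common helper to be usable here, it would need a trylock flavour
with exactly these semantics -- essentially the loop you are deleting,
hoisted into common code. Untested sketch, helper name made up:

	/*
	 * Try to take every vcpu->mutex without sleeping: either we get
	 * them all, or we give up immediately. No waiting, no -EINTR.
	 */
	int kvm_trylock_all_vcpus(struct kvm *kvm)
	{
		struct kvm_vcpu *vcpu;
		unsigned long i, j;

		lockdep_assert_held(&kvm->lock);

		kvm_for_each_vcpu(i, vcpu, kvm) {
			if (!mutex_trylock(&vcpu->mutex))
				goto out_unlock;
		}
		return 0;

	out_unlock:
		/* Drop the mutexes we did manage to take, in order. */
		kvm_for_each_vcpu(j, vcpu, kvm) {
			if (j == i)
				break;
			mutex_unlock(&vcpu->mutex);
		}
		return -EBUSY;
	}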

We can argue whether or not these are good guarantees (or
requirements) to give to (or demand from) userspace, but that's what
we have, and I'm not prepared to break any of it.

At the end of the day, the x86 locking serves completely different
purposes. It wants to gracefully wait for vcpus to exit and is happy
to replay things, because migration (which is what x86 seems to be
using this for) is a stupidly long process. Our locking is designed to
either succeed or fail quickly, because some of the lock paths are on
the critical path for VM startup and configuration.

So for this series to be acceptable, you'd have to provide the same
semantics. It is probably doable with a bit of macro magic, at the
expense of readability.

What I would also like to see is for this primitive to be usable with
scoped_cond_guard(), which would make the code much more readable.
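
Roughly, and modulo the exact cleanup.h incantations (this is only a
sketch with made-up guard names, reusing the trylock helper sketched
above), something like:

	/* The plain guard only exists so the _try variant can extend it. */
	DEFINE_GUARD(kvm_all_vcpus, struct kvm *,
		     kvm_lock_all_vcpus(_T), kvm_unlock_all_vcpus(_T))
	DEFINE_GUARD_COND(kvm_all_vcpus, _try,
			  kvm_trylock_all_vcpus(_T) == 0)

and at the call sites:

	int ret = 0;

	mutex_lock(&kvm->lock);

	scoped_cond_guard (kvm_all_vcpus_try, ret = -EBUSY, kvm) {
		/* All vcpu->mutex held; released on scope exit. */
		ret = do_locked_update(kvm);	/* made-up placeholder */
	}

	mutex_unlock(&kvm->lock);
	return ret;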

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
