Date:   Wed, 12 Feb 2020 16:47:56 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Michal Simek <michal.simek@...inx.com>
Cc:     linux-kernel@...r.kernel.org, monstr@...str.eu, git@...inx.com,
        arnd@...db.de, Stefan Asserhall <stefan.asserhall@...inx.com>,
        Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>
Subject: Re: [PATCH 6/7] microblaze: Implement architecture spinlock

On Wed, Feb 12, 2020 at 04:42:28PM +0100, Michal Simek wrote:
> From: Stefan Asserhall <stefan.asserhall@...inx.com>
> 
> Use exclusive loads/stores to implement spinlocks that can be used on
> SMP systems.
> 
> Signed-off-by: Stefan Asserhall <stefan.asserhall@...inx.com>
> Signed-off-by: Michal Simek <michal.simek@...inx.com>
> ---
> 
>  arch/microblaze/include/asm/spinlock.h       | 240 +++++++++++++++++++
>  arch/microblaze/include/asm/spinlock_types.h |  25 ++
>  2 files changed, 265 insertions(+)
>  create mode 100644 arch/microblaze/include/asm/spinlock.h
>  create mode 100644 arch/microblaze/include/asm/spinlock_types.h
> 
> diff --git a/arch/microblaze/include/asm/spinlock.h b/arch/microblaze/include/asm/spinlock.h
> new file mode 100644
> index 000000000000..0199ea9f7f0f
> --- /dev/null
> +++ b/arch/microblaze/include/asm/spinlock.h
> @@ -0,0 +1,240 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2013-2020 Xilinx, Inc.
> + */
> +
> +#ifndef _ASM_MICROBLAZE_SPINLOCK_H
> +#define _ASM_MICROBLAZE_SPINLOCK_H
> +
> +/*
> + * Unlocked value: 0
> + * Locked value: 1
> + */
> +#define arch_spin_is_locked(x)	(READ_ONCE((x)->lock) != 0)
> +
> +static inline void arch_spin_lock(arch_spinlock_t *lock)
> +{
> +	unsigned long tmp;
> +
> +	__asm__ __volatile__ (
> +		/* exclusive load: read the lock word at address %1 into %0 */
> +		"1:	lwx	 %0, %1, r0;\n"
> +		/* lock already held (non-zero)? spin on the load */
> +		"	bnei	%0, 1b;\n"
> +		/* set %0 to 1, the locked value */
> +		"	addi	%0, r0, 1;\n"
> +		/* exclusive store: attempt to write %0 to the lock word */
> +		"	swx	%0, %1, r0;\n"
> +		/* copy MSR[C] into %0; carry is set when the store fails */
> +		"	addic	%0, r0, 0;\n"
> +		/* store failed (MSR[C] set)? try again */
> +		"	bnei	%0, 1b;\n"
> +		/* Outputs: temp variable for load result */
> +		: "=&r" (tmp)
> +		/* Inputs: lock address */
> +		: "r" (&lock->lock)
> +		: "cc", "memory"
> +	);
> +}

That's a test-and-set spinlock if I read it correctly. Why? That's the
worst spinlock implementation possible: it is completely unfair, and
every waiter keeps pounding on the same cache line.
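
A fair alternative, and what several architectures used before the
generic queued spinlock existed, is a ticket lock. A minimal sketch
with GCC __atomic builtins -- hypothetical names, not actual kernel
code:

	typedef struct {
		unsigned short next;	/* next ticket to hand out */
		unsigned short owner;	/* ticket currently being served */
	} ticket_lock_t;

	static inline void ticket_lock(ticket_lock_t *l)
	{
		/* take a ticket; fetch-add orders contenders FIFO */
		unsigned short me = __atomic_fetch_add(&l->next, 1,
						       __ATOMIC_RELAXED);

		/* spin read-only until our ticket comes up */
		while (__atomic_load_n(&l->owner, __ATOMIC_ACQUIRE) != me)
			;
	}

	static inline void ticket_unlock(ticket_lock_t *l)
	{
		/* only the holder writes owner, so a plain read is fine */
		__atomic_store_n(&l->owner, l->owner + 1, __ATOMIC_RELEASE);
	}

Waiters spin on local cached reads instead of exclusive accesses, and
the lock is granted in strict arrival order, so nobody can starve.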
