Date:   Thu, 5 Jul 2018 19:59:02 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Guo Ren <ren_guo@...ky.com>
Cc:     linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
        tglx@...utronix.de, daniel.lezcano@...aro.org,
        jason@...edaemon.net, arnd@...db.de, c-sky_gcc_upstream@...ky.com,
        gnu-csky@...tor.com, thomas.petazzoni@...tlin.com,
        wbx@...ibc-ng.org, green.hu@...il.com
Subject: Re: [PATCH V2 11/19] csky: Atomic operations

On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:

> +static inline void arch_spin_lock(arch_spinlock_t *lock)
> +{
> +	unsigned int *p = &lock->lock;
> +	unsigned int tmp;
> +
> +	asm volatile (
> +		"1:	ldex.w		%0, (%1) \n"
> +		"	bnez		%0, 1b   \n"
> +		"	movi		%0, 1    \n"
> +		"	stex.w		%0, (%1) \n"
> +		"	bez		%0, 1b   \n"
> +		: "=&r" (tmp)
> +		: "r"(p)
> +		: "memory");
> +	smp_mb();
> +}

Test-and-set with MB acting as ACQUIRE, ok.

> +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> +{
> +	unsigned int *p = &lock->lock;
> +	unsigned int tmp;
> +
> +	smp_mb();
> +	asm volatile (
> +		"1:	ldex.w		%0, (%1) \n"
> +		"	movi		%0, 0    \n"
> +		"	stex.w		%0, (%1) \n"
> +		"	bez		%0, 1b   \n"
> +		: "=&r" (tmp)
> +		: "r"(p)
> +		: "memory");
> +}

MB acting as RELEASE, but _why_ are you using an LDEX/STEX to clear the
lock word? Would not a normal store work?
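(For illustration, the plain-store unlock being suggested could look like
the following. This is a hedged, generic C11 sketch, not csky asm; the
in-kernel version would be a plain store preceded by smp_mb(), or a
store-release if the architecture has one.)

```c
#include <stdatomic.h>

typedef struct {
	atomic_uint lock;
} arch_spinlock_t;

/* Sketch only: the lock holder is the sole writer of the lock word,
 * so clearing it needs no load-locked/store-conditional retry loop --
 * a single store with release ordering is sufficient. */
static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	atomic_store_explicit(&lock->lock, 0, memory_order_release);
}
```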

Also, the fact that you need MB for release implies your LDEX does not
in fact imply any ordering, and that your xchg/cmpxchg implementation is
broken.

> +static inline int arch_spin_trylock(arch_spinlock_t *lock)
> +{
> +	unsigned int *p = &lock->lock;
> +	unsigned int tmp;
> +
> +	asm volatile (
> +		"1:	ldex.w		%0, (%1) \n"
> +		"	bnez		%0, 2f   \n"
> +		"	movi		%0, 1    \n"
> +		"	stex.w		%0, (%1) \n"
> +		"	bez		%0, 1b   \n"
> +		"	movi		%0, 0    \n"
> +		"2:				 \n"
> +		: "=&r" (tmp)
> +		: "r"(p)
> +		: "memory");
> +	smp_mb();
> +
> +	return !tmp;
> +}

Strictly speaking you can avoid the MB on failure. You only need to
provide ACQUIRE semantics on success.
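(Sketched generically in C11 atomics -- names and layout are
illustrative, not the csky code -- "ACQUIRE only on success" is exactly
what a compare-exchange with split success/failure orderings expresses:)

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
	atomic_uint lock;
} arch_spinlock_t;

/* Sketch only: acquire ordering is paid for on the success path;
 * a failed trylock takes no lock and thus needs no barrier at all. */
static inline bool arch_spin_trylock(arch_spinlock_t *lock)
{
	unsigned int expected = 0;

	return atomic_compare_exchange_strong_explicit(
		&lock->lock, &expected, 1,
		memory_order_acquire,	/* success: ACQUIRE */
		memory_order_relaxed);	/* failure: no ordering */
}
```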

That said, I would really suggest you implement a ticket lock instead of
a test-and-set lock. They're not really all that complicated and do
provide better worst-case behaviour.
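(The ticket-lock idea can be sketched in generic C11 atomics; a real
csky version would use ldex/stex, and the field names here are made up
for illustration. Lockers take a ticket and spin until it is served,
which gives FIFO ordering and a bounded worst-case wait -- the property
a test-and-set lock lacks:)

```c
#include <stdatomic.h>

/* Sketch only: "next" is the next ticket to hand out, "owner" is the
 * ticket currently being served. */
typedef struct {
	atomic_ushort owner;
	atomic_ushort next;
} ticket_lock_t;

static inline void ticket_lock(ticket_lock_t *lock)
{
	unsigned short my =
		atomic_fetch_add_explicit(&lock->next, 1,
					  memory_order_relaxed);

	/* Spin until our ticket comes up; a real implementation would
	 * insert cpu_relax() in the loop body. */
	while (atomic_load_explicit(&lock->owner,
				    memory_order_acquire) != my)
		;
}

static inline void ticket_unlock(ticket_lock_t *lock)
{
	unsigned short owner =
		atomic_load_explicit(&lock->owner, memory_order_relaxed);

	/* Hand the lock to the next ticket; release orders the
	 * critical section before the store. */
	atomic_store_explicit(&lock->owner, owner + 1,
			      memory_order_release);
}
```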


> +/****** read lock/unlock/trylock ******/

Please have a look at using qrwlock -- esp. if you implement a ticket
lock, then the rwlock comes for 'free'.
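(Wiring up qrwlock is mostly Kconfig plumbing; the generic code builds
the rwlock on top of the architecture's atomics. A sketch -- the exact
placement in the csky Kconfig is illustrative:)

```
# arch/csky/Kconfig -- sketch, placement illustrative
config CSKY
	def_bool y
	select ARCH_USE_QUEUED_RWLOCKS if SMP
```

with the arch's asm/spinlock.h then pulling in asm-generic/qrwlock.h.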
