Date:   Fri, 6 Jul 2018 14:03:23 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Guo Ren <ren_guo@...ky.com>
Cc:     linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
        tglx@...utronix.de, daniel.lezcano@...aro.org,
        jason@...edaemon.net, arnd@...db.de, c-sky_gcc_upstream@...ky.com,
        gnu-csky@...tor.com, thomas.petazzoni@...tlin.com,
        wbx@...ibc-ng.org, green.hu@...il.com
Subject: Re: [PATCH V2 11/19] csky: Atomic operations

On Fri, Jul 06, 2018 at 07:44:03PM +0800, Guo Ren wrote:
> On Thu, Jul 05, 2018 at 07:59:02PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
> > 
> > > +static inline void arch_spin_lock(arch_spinlock_t *lock)
> > > +{
> > > +	unsigned int *p = &lock->lock;
> > > +	unsigned int tmp;
> > > +
> > > +	asm volatile (
> > > +		"1:	ldex.w		%0, (%1) \n"
> > > +		"	bnez		%0, 1b   \n"
> > > +		"	movi		%0, 1    \n"
> > > +		"	stex.w		%0, (%1) \n"
> > > +		"	bez		%0, 1b   \n"
> > > +		: "=&r" (tmp)
> > > +		: "r"(p)
> > > +		: "memory");
> > > +	smp_mb();
> > > +}
> > 
> > Test-and-set with MB acting as ACQUIRE, ok.
> Em ... Ok, I'll try to use a test-and-set function instead of it.

"test-and-set" is just the name of this type of spinlock implementation.

You _could_ use the Linux test_and_set bitops, but those are defined on
unsigned long, while spinlock_t is generally assumed to be unsigned int
sized.
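
Purely to illustrate the shape of it, here is a userspace C11 sketch of
a test-and-set lock on an unsigned int (names invented, not the kernel
code); the explicit ACQUIRE plays the role your trailing smp_mb() plays
in the asm version:

#include <stdatomic.h>

typedef struct {
	atomic_uint lock;	/* 0 = unlocked, 1 = locked */
} tas_spinlock_t;

static inline void tas_spin_lock(tas_spinlock_t *l)
{
	/*
	 * Spin until we observe 0 and manage to write 1; the ACQUIRE on
	 * the successful exchange orders the critical section after it.
	 */
	while (atomic_exchange_explicit(&l->lock, 1, memory_order_acquire))
		;
}

static inline void tas_spin_unlock(tas_spinlock_t *l)
{
	/* RELEASE store: the critical section cannot leak past this. */
	atomic_store_explicit(&l->lock, 0, memory_order_release);
}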

Go with the ticket locks as per below.

> > Also, the fact that you need MB for release implies your LDEX does not
> > in fact imply anything and your xchg/cmpxchg implementation is broken.
> xchg/cmpxchg broken without the 1st smp_mb()? Why do we need to protect
> the instruction flow before the ldex.w?

See the email I sent earlier in that thread.
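
The short of it: xchg() and cmpxchg() must be fully ordered, that is,
act as a barrier on both sides of the access. If ldex.w by itself
orders nothing, you need a barrier before the ll/sc loop as well as
after it. In C11 terms the shape is roughly this (a sketch only, with
the relaxed exchange standing in for your ldex/stex retry loop):

#include <stdatomic.h>

static inline unsigned int fully_ordered_xchg(atomic_uint *p, unsigned int new)
{
	unsigned int old;

	/* barrier _before_ the swap: orders earlier accesses against it */
	atomic_thread_fence(memory_order_seq_cst);
	/* the relaxed exchange stands in for the ldex/stex retry loop */
	old = atomic_exchange_explicit(p, new, memory_order_relaxed);
	/* barrier _after_ the swap: orders it against later accesses */
	atomic_thread_fence(memory_order_seq_cst);

	return old;
}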

> Ok, I'll try to implement a ticket lock in the next version of the patch.

If you need inspiration, look at:

  git show 81bb5c6420635dfd058c210bd342c29c95ccd145^1:arch/arm64/include/asm/spinlock.h

Or look at the current version of that file and ignore the LSE version.

Note that unlock is a half-word (u16) store; not having seen your arch
manual yet, I don't know if you even have that instruction.
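
To give the idea (not the arm64 code; the two halves are kept as
separate u16 fields here instead of one packed 32-bit word), a
userspace C11 sketch of a ticket lock:

#include <stdatomic.h>
#include <stdint.h>

typedef struct {
	_Atomic uint16_t next;	/* next ticket to hand out */
	_Atomic uint16_t owner;	/* ticket currently being served */
} ticket_lock_t;

static inline void ticket_lock_acquire(ticket_lock_t *l)
{
	/* grab a ticket ... */
	uint16_t me = atomic_fetch_add_explicit(&l->next, 1,
						memory_order_relaxed);

	/* ... and wait for our turn; ACQUIRE pairs with the unlock store */
	while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
		;
}

static inline void ticket_lock_release(ticket_lock_t *l)
{
	/*
	 * Only the lock holder writes ->owner, so a plain read plus a
	 * half-word (u16) RELEASE store is all the unlock has to do.
	 */
	uint16_t owner = atomic_load_explicit(&l->owner, memory_order_relaxed);

	atomic_store_explicit(&l->owner, owner + 1, memory_order_release);
}

Unlike test-and-set, this is FIFO fair: waiters get the lock in the
order in which they took their tickets.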
