Date:   Fri, 6 Jul 2018 13:56:14 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Guo Ren <ren_guo@...ky.com>
Cc:     linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
        tglx@...utronix.de, daniel.lezcano@...aro.org,
        jason@...edaemon.net, arnd@...db.de, c-sky_gcc_upstream@...ky.com,
        gnu-csky@...tor.com, thomas.petazzoni@...tlin.com,
        wbx@...ibc-ng.org, green.hu@...il.com,
        Will Deacon <will.deacon@....com>
Subject: Re: [PATCH V2 11/19] csky: Atomic operations

On Fri, Jul 06, 2018 at 07:01:31PM +0800, Guo Ren wrote:
> On Thu, Jul 05, 2018 at 07:50:59PM +0200, Peter Zijlstra wrote:

> > What's the memory ordering rules for your LDEX/STEX ?
> Every CPU has a local exclusive monitor.
> 
> "Ldex rz, (rx, #off)" will add an entry into the local monitor; the
> entry is composed of an address tag and an exclusive flag (initialized
> to 1). Any store (including other cores') clears the exclusive flag to
> 0 in the entry indexed by that address tag.
> 
> "Stex rz, (rx, #off)" has two outcomes:
> 1. Store success: when the entry's exclusive flag is 1, it stores rz
> to address [rx + off] and rz is set to 1.
> 2. Store failure: when the entry's exclusive flag is 0, rz is just
> set to 0.
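
[ The monitor behaviour described above can be sketched as a toy Python
  model. This is purely illustrative: it models a single-entry monitor
  per CPU keyed on the exact address, and ignores real-hardware details
  such as reservation-granule (cache-line) granularity. All names here
  are made up for the sketch. ]

```python
class ExclusiveMonitor:
    # Per-CPU local monitor: one entry of (address tag, exclusive flag),
    # as in the LDEX/STEX description above. Toy model only.
    def __init__(self):
        self.tag = None
        self.flag = 0

class Machine:
    def __init__(self, ncpus):
        self.mem = {}
        self.monitors = [ExclusiveMonitor() for _ in range(ncpus)]

    def ldex(self, cpu, addr):
        # Load-exclusive: record the address tag with flag initialized to 1.
        mon = self.monitors[cpu]
        mon.tag, mon.flag = addr, 1
        return self.mem.get(addr, 0)

    def store(self, cpu, addr, val):
        # Any store (including other cores') clears the exclusive flag
        # in every entry indexed by this address tag.
        self.mem[addr] = val
        for mon in self.monitors:
            if mon.tag == addr:
                mon.flag = 0

    def stex(self, cpu, addr, val):
        # Store-exclusive: succeeds (returns 1, i.e. "rz set to 1") only
        # if the flag is still set; the successful store itself clears
        # other CPUs' matching flags. Otherwise fails and returns 0.
        mon = self.monitors[cpu]
        if mon.tag == addr and mon.flag:
            self.store(cpu, addr, val)
            return 1
        return 0

m = Machine(2)
m.ldex(0, 0x100)
assert m.stex(0, 0x100, 5) == 1   # no intervening store: success

m.ldex(0, 0x100)
m.store(1, 0x100, 7)              # another core's store breaks the flag
assert m.stex(0, 0x100, 9) == 0   # store-exclusive fails
```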

That's how LL/SC works. What I was asking is if they have any effect on
memory ordering. Some architectures have LL/SC imply memory ordering,
most do not.

Going by your spinlock implementation they don't imply any memory
ordering.

> > The mandated semantics for xchg() / cmpxchg() is an effective smp_mb()
> > before _and_ after.
> 
> 	switch (size) {						\
> 	case 4:							\
> 		smp_mb();					\
> 		asm volatile (					\
> 		"1:	ldex.w		%0, (%3) \n"		\
> 		"	mov		%1, %2   \n"		\
> 		"	stex.w		%1, (%3) \n"		\
> 		"	bez		%1, 1b   \n"		\
> 			: "=&r" (__ret), "=&r" (tmp)		\
> 			: "r" (__new), "r"(__ptr)		\
> 			: "memory");				\
> 		smp_mb();					\
> 		break;						\
> Hmm?
> But I couldn't understand what's wrong without the first smp_mb().
> The first smp_mb() makes all prior loads/stores finish before the
> ldex.w. Is it really necessary?

Yes.

	CPU0			CPU1

	r1 = READ_ONCE(x);	WRITE_ONCE(y, 1);
	r2 = xchg(&y, 2);	smp_store_release(&x, 1);

must not allow: r1==1 && r2==0
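
[ A toy way to see why the leading barrier matters: under plain
  sequentially consistent interleaving the forbidden outcome cannot
  occur, but if CPU0's xchg is allowed to be reordered before its load
  of x (which is what an unordered LL/SC without the first smp_mb()
  permits), it can. The sketch below only enumerates interleavings; it
  is not a real memory-model checker, and the op encodings are made up
  for illustration. ]

```python
from itertools import permutations

def run(schedule, cpu0_ops, cpu1_ops):
    # Execute one interleaving over shared memory; collect registers.
    mem = {"x": 0, "y": 0}
    regs = {}
    streams = {0: iter(cpu0_ops), 1: iter(cpu1_ops)}
    for cpu in schedule:
        next(streams[cpu])(mem, regs)
    return regs["r1"], regs["r2"]

# CPU1: WRITE_ONCE(y, 1); smp_store_release(&x, 1);
cpu1 = [lambda m, r: m.__setitem__("y", 1),
        lambda m, r: m.__setitem__("x", 1)]

def outcomes(cpu0):
    # All interleavings preserving each CPU's program order.
    return {run(s, cpu0, cpu1) for s in set(permutations([0, 0, 1, 1]))}

# Ordered CPU0: the load of x executes before the xchg of y,
# as the leading smp_mb() guarantees.
ordered = [lambda m, r: r.__setitem__("r1", m["x"]),
           lambda m, r: (r.__setitem__("r2", m["y"]),
                         m.__setitem__("y", 2))]

# Unordered CPU0: without the barrier, the CPU may effectively hoist
# the xchg of y above the load of x.
reordered = [lambda m, r: (r.__setitem__("r2", m["y"]),
                           m.__setitem__("y", 2)),
             lambda m, r: r.__setitem__("r1", m["x"])]

assert (1, 0) not in outcomes(ordered)   # forbidden outcome never occurs
assert (1, 0) in outcomes(reordered)     # dropping the barrier admits it
```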

> > The above implementation suggests LDEX implies a SYNC.IS, is this
> > correct?
> No, ldex doesn't imply a sync.is.

Right, as per the spinlock emails, then your proposed primitives are
incorrect.
