Message-ID: <20090526155906.GC26713@n2100.arm.linux.org.uk>
Date:	Tue, 26 May 2009 16:59:06 +0100
From:	Russell King - ARM Linux <linux@....linux.org.uk>
To:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc:	Jamie Lokier <jamie@...reable.org>,
	Catalin Marinas <catalin.marinas@....com>,
	linux-arm-kernel@...ts.arm.linux.org.uk,
	linux-kernel@...r.kernel.org
Subject: Re: Broken ARM atomic ops wrt memory barriers (was : [PATCH] Add
	cmpxchg support for ARMv6+ systems)

On Tue, May 26, 2009 at 11:36:54AM -0400, Mathieu Desnoyers wrote:
> > diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
> > index bd4dc8e..e9889c2 100644
> > --- a/arch/arm/include/asm/system.h
> > +++ b/arch/arm/include/asm/system.h
> > @@ -248,6 +248,8 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
> >  	unsigned int tmp;
> >  #endif
> >  
> > +	smp_mb();
> > +
> >  	switch (size) {
> >  #if __LINUX_ARM_ARCH__ >= 6
> >  	case 1:
> > @@ -258,7 +260,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
> >  		"	bne	1b"
> >  			: "=&r" (ret), "=&r" (tmp)
> >  			: "r" (x), "r" (ptr)
> > -			: "memory", "cc");
> > +			: "cc");
> 
> I would not remove the "memory" constraint in here. Anyway it's just a
> compiler barrier, I doubt it would make anything faster (due to the
> following smp_mb() added), but it surely makes things harder to
> understand.

If you don't already know that smp_mb() is always a compiler barrier,
then I guess that's true - but if you don't know what the barriers are
defined to be, should you be trying to understand atomic ops?
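
For anyone following along, a rough sketch of the definitions involved
(simplified from what the arch/arm headers look like around now; the
exact config conditionals and the ARMv6 mcr form are omitted):

	/* Pure compiler barrier - no instruction emitted. */
	#define barrier()	__asm__ __volatile__("" : : : "memory")

	/* ARMv7 data memory barrier, with a "memory" clobber of its own. */
	#define dmb()		__asm__ __volatile__("dmb" : : : "memory")

	/* SMP: real barrier instruction; UP: collapses to barrier().
	 * Either way smp_mb() is always at least a compiler barrier,
	 * which is why the "memory" clobber on the xchg asm above
	 * becomes redundant once smp_mb() brackets it. */
	#ifdef CONFIG_SMP
	#define smp_mb()	dmb()
	#else
	#define smp_mb()	barrier()
	#endif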

> If we determine that some reorderings are not possible on ARM, we might
> eventually change rmb() for a simple barrier() and make atomic op
> barriers a bit more lightweight.

There seems to be no "read memory barrier" on ARM - the dmb instruction
can be restricted to writes only, but there is no read-only variant.

Moreover, there's a comment in the architecture reference manual that
implementations which don't provide the other dmb variants must default
to the full dmb, but it then goes on to say that programs should not
rely on this (what... that the relaxed dmb()s have any effect
whatsoever?)
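
To be concrete about the variants (a sketch only - the dmb_st() wrapper
name is just for illustration, not anything in the tree):

	/* Full barrier: orders both loads and stores. */
	#define dmb()		__asm__ __volatile__("dmb" : : : "memory")

	/* Store-only variant (DMB ST).  There is no load-only
	 * counterpart, so a "read barrier" can't be anything lighter
	 * than the full dmb. */
	#define dmb_st()	__asm__ __volatile__("dmb st" : : : "memory")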

Suggest we don't go anywhere near those until the ARMv7 stuff has matured
for a few years and these details are resolved.
