Date:	Tue, 14 Aug 2007 15:34:25 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	paulmck@...ux.vnet.ibm.com
CC:	Herbert Xu <herbert@...dor.apana.org.au>, csnook@...hat.com,
	dhowells@...hat.com, linux-kernel@...r.kernel.org,
	linux-arch@...r.kernel.org, torvalds@...ux-foundation.org,
	netdev@...r.kernel.org, akpm@...ux-foundation.org, ak@...e.de,
	heiko.carstens@...ibm.com, davem@...emloft.net,
	schwidefsky@...ibm.com, wensong@...ux-vs.org, horms@...ge.net.au,
	wjiang@...ilience.com, cfriesen@...tel.com, zlynx@....org,
	rpjday@...dspring.com, jesper.juhl@...il.com
Subject: Re: [PATCH 6/24] make atomic_read() behave consistently on frv

Paul E. McKenney wrote:
> On Mon, Aug 13, 2007 at 01:15:52PM +0800, Herbert Xu wrote:
> 
>>Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:
>>
>>>On Sat, Aug 11, 2007 at 08:54:46AM +0800, Herbert Xu wrote:
>>>
>>>>Chris Snook <csnook@...hat.com> wrote:
>>>>
>>>>>cpu_relax() contains a barrier, so it should do the right thing.  For 
>>>>>non-smp architectures, I'm concerned about interacting with interrupt 
>>>>>handlers.  Some drivers do use atomic_* operations.
>>>>
>>>>What problems with interrupt handlers? Access to int/long must
>>>>be atomic or we're in big trouble anyway.
>>>
>>>Reordering due to compiler optimizations.  CPU reordering does not
>>>affect interactions with interrupt handlers on a given CPU, but
>>>reordering due to compiler code-movement optimization does.  Since
>>>volatile can in some cases suppress code-movement optimizations,
>>>it can affect interactions with interrupt handlers.
>>
>>If such reordering matters, then you should use one of the
>>*mb macros or barrier() rather than relying on possibly
>>hidden volatile cast.
> 
> 
> If communicating among CPUs, sure.  However, when communicating between
> mainline and interrupt/NMI handlers on the same CPU, the barrier() and
> most especially the *mb() macros are gross overkill.  So there really
> truly is a place for volatile -- not a large place, to be sure, but a
> place nonetheless.

I really would like all volatile users to go away and be replaced
by explicit barriers. It makes things nicer and more explicit. For
the atomic_t type there probably aren't many optimisations that
volatile would disallow (in actual kernel code), but for others
(eg. bitops, maybe atomic ops in UP kernels), there would be.
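
A contrived example of the kind of thing volatile forbids (made-up
bit names and flag word): if test_bit() used a plain, non-volatile
access, the compiler could load 'flags' once and test both bits from
that single value; the volatile cast forces two separate loads
whether or not the caller cares.

	if (test_bit(FOO, &flags) || test_bit(BAR, &flags))
		handle_it();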

Maybe volatile is the safe way to go, but it does obscure the cases
where there is a real need for barriers.

Many atomic operations are allowed to be reordered between CPUs, so
I don't see a good rationale for ordering them within the CPU (also,
loads and stores to long and pointer types are not ordered like this,
although we do consider those to be atomic operations too).
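
For example (made-up names; atomic_set()/atomic_read() by themselves
imply no barriers):

	/* CPU 0 */
	atomic_set(&data, 1);
	atomic_set(&flag, 1);

	/* CPU 1 can legitimately observe flag == 1 while data == 0;
	 * an explicit smp_wmb()/smp_rmb() pair is needed if the
	 * order matters, volatile or not */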

barrier() in a way is like enforcing sequential memory ordering
between process and interrupt context, whereas volatile is just
enforcing coherency of a single memory location (and as such is
cheaper).
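
Roughly (made-up flag, polled in process context and set by an
interrupt handler on the same CPU):

	while (!flag)
		barrier();	/* compiler must assume all memory changed:
				 * everything it had read from memory gets
				 * reloaded, not just flag */

	while (!*(volatile int *)&flag)
		;		/* only flag is reloaded each pass; other
				 * cached values are left alone, hence
				 * "cheaper" */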

What do you think of this crazy idea?

/*
 * Enforce a compiler barrier only for operations on location x.
 * Call multiple times to provide an ordering between multiple
 * memory locations. All other memory can be assumed by the
 * compiler to remain unchanged, and accesses to it may be
 * reordered.
 */
#define order(x) asm volatile("" : "+m" (x))
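
Something like (made-up names, untested) publishing a value to an
interrupt handler on the same CPU without paying for a full barrier():

	/* process context, may be interrupted at any point */
	shared_data = val;
	order(shared_data);	/* the data store can't sink below this... */
	order(data_ready);	/* ...and the flag store can't rise above
				 * this, so a handler that interrupts us
				 * anywhere sees the data before the flag */
	data_ready = 1;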

-- 
SUSE Labs, Novell Inc.
