Message-ID: <18115.62741.807704.969977@cargo.ozlabs.ibm.com>
Date: Thu, 16 Aug 2007 16:56:21 +1000
From: Paul Mackerras <paulus@...ba.org>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Satyam Sharma <satyam@...radead.org>,
Christoph Lameter <clameter@....com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Stefan Richter <stefanr@...6.in-berlin.de>,
Chris Snook <csnook@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-arch@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
netdev@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
ak@...e.de, heiko.carstens@...ibm.com, davem@...emloft.net,
schwidefsky@...ibm.com, wensong@...ux-vs.org, horms@...ge.net.au,
wjiang@...ilience.com, cfriesen@...tel.com, zlynx@....org,
rpjday@...dspring.com, jesper.juhl@...il.com,
segher@...nel.crashing.org
Subject: Re: [PATCH 0/24] make atomic_read() behave consistently across all architectures

Herbert Xu writes:
> On Thu, Aug 16, 2007 at 02:11:43PM +1000, Paul Mackerras wrote:
> >
> > The uses of atomic_read where one might want it to allow caching of
> > the result seem to me to fall into 3 categories:
> >
> > 1. Places that are buggy because of a race arising from the way it's
> > used.
> >
> > 2. Places where there is a race but it doesn't matter because we're
> > doing some clever trick.
> >
> > 3. Places where there is some locking in place that eliminates any
> > potential race.
>
> Agreed.
>
> > In case 1, adding volatile won't solve the race, of course, but it's
> > hard to argue that we shouldn't do something because it will slow down
> > buggy code. Case 2 is hopefully pretty rare and accompanied by large
> > comment blocks, and in those cases caching the result of atomic_read
> > explicitly in a local variable would probably make the code clearer.
> > And in case 3 there is no reason to use atomic_t at all; we might as
> > well just use an int.
>
> Since adding volatile doesn't help any of the 3 cases, and
> takes away optimisations from both 2 and 3, I wonder what
> is the point of the addition after all?
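
(To make case 2 concrete, here is a rough sketch -- the names are
invented, not taken from any real driver.  The point is that the
snapshot is taken exactly once, on purpose, and a comment says why:)

#include <linux/kernel.h>       /* printk */
#include <asm/atomic.h>         /* atomic_t, atomic_read */

static atomic_t pending_requests;       /* made-up example state */

static void report_progress(void)
{
        int snapshot = atomic_read(&pending_requests);

        /*
         * Racy by design: the counter can change the moment after we
         * read it, but an approximate value is fine for a progress
         * report, and taking one explicit snapshot keeps the message
         * internally consistent whether or not atomic_read() itself
         * may be cached.
         */
        printk(KERN_INFO "roughly %d requests pending\n", snapshot);
}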

Note that I said these are the cases _where one might want to allow
caching_, so of course adding volatile doesn't help _these_ cases.
There are of course other cases where one definitely doesn't want to
allow the compiler to cache the value, such as when polling an atomic
variable waiting for another CPU to change it, and from my inspection
so far these cases seem to be the majority.
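
A minimal sketch of that polling pattern (again with invented names):
if the compiler were free to cache the value of atomic_read, it could
hoist the load out of the loop and spin forever on a stale register
copy.

#include <asm/atomic.h>         /* atomic_t, atomic_read */
#include <asm/processor.h>      /* cpu_relax */

static atomic_t other_cpu_busy;         /* made-up flag, cleared by another CPU */

static void wait_for_other_cpu(void)
{
        /*
         * With "volatile" behaviour the value is re-loaded on every
         * iteration, so the store done by the other CPU is eventually
         * observed and the loop terminates.
         */
        while (atomic_read(&other_cpu_busy))
                cpu_relax();
}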

The reasons for having "volatile" behaviour of atomic_read (whether or
not that is achieved by use of the "volatile" C keyword) are:
- It matches the normal expectation based on the name "atomic_read"
- It matches the behaviour of the other atomic_* primitives
- It avoids bugs in the cases where "volatile" behaviour is required
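
To expand on the parenthesis above: the "volatile" behaviour can be
obtained either with or without the keyword.  Two sketches follow;
neither is copied from any real architecture header.

typedef struct { int counter; } atomic_t;

/* (a) With the keyword: route the load through a volatile-qualified
 *     lvalue, so the compiler must do a fresh load at every use.
 */
#define atomic_read(v)          (*(volatile int *)&(v)->counter)

/* (b) Without the keyword: keep the plain load, and rely on an explicit
 *     compiler barrier (e.g. in the polling loop) to discard any value
 *     cached in a register before the next read.
 */
#define atomic_read_plain(v)    ((v)->counter)
#define barrier()               __asm__ __volatile__("" : : : "memory")

Option (a) gives the guarantee at the atomic_read() call site itself;
option (b) pushes the burden onto every caller that polls.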

To my mind these outweigh the small benefit for some code of the
non-volatile (caching-allowed) behaviour. In fact it's pretty minor
either way, and since x86[-64] has this behaviour, one can expect the
potential bugs in generic code to have mostly been found, although
perhaps not all of them since x86[-64] has less aggressive reordering
of memory accesses and fewer registers in which to cache things than
some other architectures.

Paul.