Message-ID: <18117.28409.943948.355586@cargo.ozlabs.ibm.com>
Date: Fri, 17 Aug 2007 19:48:41 +1000
From: Paul Mackerras <paulus@...ba.org>
To: Satyam Sharma <satyam@...radead.org>
Cc: Nick Piggin <nickpiggin@...oo.com.au>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Segher Boessenkool <segher@...nel.crashing.org>,
heiko.carstens@...ibm.com, horms@...ge.net.au,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
rpjday@...dspring.com, ak@...e.de, netdev@...r.kernel.org,
cfriesen@...tel.com, Andrew Morton <akpm@...ux-foundation.org>,
jesper.juhl@...il.com, linux-arch@...r.kernel.org, zlynx@....org,
clameter@....com, schwidefsky@...ibm.com,
Chris Snook <csnook@...hat.com>,
Herbert Xu <herbert.xu@...hat.com>, davem@...emloft.net,
wensong@...ux-vs.org, wjiang@...ilience.com
Subject: Re: [PATCH 0/24] make atomic_read() behave consistently across all
architectures

Satyam Sharma writes:
> I wonder if this'll generate smaller and better code than _both_ the
> other atomic_read_volatile() variants. Would need to build allyesconfig
> on lots of diff arch's etc to test the theory though.

I'm sure it would be a tiny effect.
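
For reference, the variants being compared look roughly like this (an
illustrative sketch from memory, not the exact code posted in the
thread):

	/* Alternative 1: access the counter through a volatile-qualified
	 * pointer, so gcc must reload it on every call. */
	#define atomic_read_volatile(v)	(*(volatile int *)&(v)->counter)

	/* Alternative 2: an empty asm with a "+m" constraint tells gcc
	 * the counter may have changed, forcing a reload without
	 * marking anything volatile. */
	static inline int atomic_read_volatile(atomic_t *v)
	{
		asm volatile("" : "+m" (v->counter));
		return v->counter;
	}
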
This whole thread is arguing about effects that are quite
insignificant. On the one hand we have the non-volatile proponents,
who want to let the compiler do extra optimizations - which amounts to
letting it elide maybe a dozen loads in the whole kernel, loads which
would almost always be L1 cache hits.
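
To make that concrete, the sort of load gcc can elide with the
non-volatile definition looks something like this (sketch):

	/* With a plain (non-volatile) atomic_read(), gcc may load
	 * v->counter once and reuse the register for both reads: */
	int a = atomic_read(&v);
	int b = atomic_read(&v);	/* second load can be elided */

	/* With a volatile atomic_read(), both loads are emitted, but
	 * they would almost always hit the L1 cache anyway. */
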
On the other hand we have the volatile proponents, who are concerned
that some code somewhere in the kernel might be buggy without the
volatile behaviour, and who also want to be able to remove some
barriers and thus save a few bytes of code and a few loads here and
there (and possibly some stores too).
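
The kind of barrier they would like to drop is the one in polling
loops such as this (sketch, assuming atomic_read() gets volatile
semantics):

	/* Today the loop needs an explicit compiler barrier so that
	 * gcc does not cache v->counter in a register: */
	while (atomic_read(&v) == 0)
		barrier();

	/* With volatile semantics in atomic_read() itself, the
	 * barrier() could go, since every call forces a fresh load: */
	while (atomic_read(&v) == 0)
		;
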
Either way the effect on code size and execution time is minuscule.

In the end the strongest argument is actually that gcc generates
unnecessarily verbose code on x86[-64] for volatile accesses. Even
then we're only talking about ~2000 bytes, or less than 1 byte per
instance of atomic_read on average, about 0.06% of the kernel text
size.
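
From memory, the extra bytes come from patterns roughly like the
following; the exact code generated obviously depends on the gcc
version, and do_something() is just a placeholder:

	/* Non-volatile read: gcc can test the memory operand directly,
	 * e.g. cmpl $0,(%rdi). */
	if (atomic_read(&v))
		do_something();

	/* Volatile read: gcc loads into a register first, e.g.
	 * movl (%rdi),%eax; testl %eax,%eax - an extra instruction
	 * per instance. */
	if (atomic_read_volatile(&v))
		do_something();
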
The x86[-64] developers seem to be willing to bear the debugging cost
involved in having the non-volatile behaviour for atomic_read, in
order to save the 2kB. That's fine with me. Either way I think
somebody should audit all the uses of atomic_read, not just for
missing barriers, but also to find the places where it's used in a
racy manner. Then we can work out where the races matter and fix them
if they do.
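
As an example of the kind of racy use I have in mind (hypothetical
code; obj and free_object() are made up):

	/* Racy: two CPUs can both observe a zero count and free the
	 * object twice. */
	atomic_dec(&obj->refcount);
	if (atomic_read(&obj->refcount) == 0)
		free_object(obj);

	/* Race-free: the decrement and the test are a single atomic
	 * operation. */
	if (atomic_dec_and_test(&obj->refcount))
		free_object(obj);
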
Paul.