Message-ID: <18117.6495.397597.582736@cargo.ozlabs.ibm.com>
Date: Fri, 17 Aug 2007 13:43:27 +1000
From: Paul Mackerras <paulus@...ba.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Christoph Lameter <clameter@....com>,
Chris Snook <csnook@...hat.com>,
Ilpo Järvinen <ilpo.jarvinen@...sinki.fi>,
Herbert Xu <herbert@...dor.apana.org.au>,
Satyam Sharma <satyam@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Stefan Richter <stefanr@...6.in-berlin.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-arch@...r.kernel.org, Netdev <netdev@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>, ak@...e.de,
heiko.carstens@...ibm.com, David Miller <davem@...emloft.net>,
schwidefsky@...ibm.com, wensong@...ux-vs.org, horms@...ge.net.au,
wjiang@...ilience.com, cfriesen@...tel.com, zlynx@....org,
rpjday@...dspring.com, jesper.juhl@...il.com,
segher@...nel.crashing.org
Subject: Re: [PATCH 0/24] make atomic_read() behave consistently across all
architectures

Linus Torvalds writes:
> In general, I'd *much* rather we used barriers. Anything that "depends" on
> volatile is pretty much set up to be buggy. But I'm certainly also willing
> to have that volatile inside "atomic_read/atomic_set()" if it avoids code
> that would otherwise break - ie if it hides a bug.

The cost of doing so seems to me to be well down in the noise - 44
bytes of extra kernel text on a ppc64 G5 config - and I don't believe
the few extra cycles for the occasional extra load would be measurable
(they should all hit in the L1 dcache).  I don't mind if x86[-64] makes
atomic_read/atomic_set non-volatile and hunts down all the missing
barriers, but for now on powerpc, I think that not having to find those
missing barriers is worth the 0.00076% increase in kernel text size.
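
To make the tradeoff concrete, here is a minimal sketch of the two
styles under discussion.  This is illustrative only - a simplified
atomic_t and names of my own choosing, not the kernel's actual
definitions:

	/* Illustrative sketch - not real kernel source. */
	typedef struct { int counter; } atomic_t;

	/*
	 * Volatile style: every read forces a fresh load from memory,
	 * so the compiler can't cache the value across loop iterations.
	 * Those forced reloads are where the extra 44 bytes of text
	 * come from.
	 */
	static inline int atomic_read_volatile(atomic_t *v)
	{
		return *(volatile int *)&v->counter;
	}

	static inline void atomic_set_volatile(atomic_t *v, int i)
	{
		*(volatile int *)&v->counter = i;
	}

	/*
	 * Barrier style: the read is a plain load, and any caller that
	 * needs a fresh value has to say so with an explicit compiler
	 * barrier.
	 */
	#define barrier() __asm__ __volatile__("" : : : "memory")

	static inline int atomic_read_plain(atomic_t *v)
	{
		return v->counter;
	}

	/*
	 * Without the barrier() (or the volatile above), the compiler
	 * is free to hoist the load out of the loop and spin forever
	 * on a stale value - exactly the class of bug the volatile
	 * version hides.
	 */
	static void wait_for_zero(atomic_t *v)
	{
		while (atomic_read_plain(v) != 0)
			barrier();
	}

A loop like wait_for_zero() is the sort of place where a missing
barrier turns into an infinite spin; the volatile version papers over
it at the cost of a few always-L1-hot loads.
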
Paul.