Message-ID: <46C239F0.3030004@redhat.com>
Date: Tue, 14 Aug 2007 19:25:36 -0400
From: Chris Snook <csnook@...hat.com>
To: Christoph Lameter <clameter@....com>
CC: linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
akpm@...ux-foundation.org, paulmck@...ux.vnet.ibm.com,
Segher Boessenkool <segher@...nel.crashing.org>,
"Luck, Tony" <tony.luck@...el.com>,
Chris Friesen <cfriesen@...tel.com>,
"Robert P. J. Day" <rpjday@...dspring.com>
Subject: Re: [PATCH 1/23] document preferred use of volatile with atomic_t

Christoph Lameter wrote:
> On Tue, 14 Aug 2007, Chris Snook wrote:
>
>>> volatile means that there is some vague notion of "read it now". But that
>>> really does not exist. Instead we control visibility via barriers (smp_wmb,
>>> smp_rmb). Would it not be best to not have volatile at all in atomic
>>> operations and let the barriers do the work?
>> From my reply in the other thread...
>>
>> But barriers force a flush of *everything* in scope, which we generally don't
>> want. On the other hand, we pretty much always want to flush atomic_*
>> operations. One way or another, we should be restricting the volatile
>> behavior to the thing that needs it. On most architectures, this patch set
>> just moves that from the declaration, where it is considered harmful, to the
>> use, where it is considered an occasional necessary evil.
>>
>> If you really, *really* distrust the compiler that much, you shouldn't be
>> using barrier, since that uses volatile under the hood too. You should just
>> go ahead and implement the atomic operations in assembler, like Segher
>> Boessenkool did for powerpc in response to my previous patchset.
>
> From my reply on the other thread:
>
> Maybe we need two read functions? One volatile, one not?

If we're going to do this (and I don't think we need to), I'd prefer that
atomic_read() be the volatile one and something like atomic_read_opt() be the
non-volatile one, to discourage premature optimization.
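
As a rough sketch of what that pair could look like (illustration only:
atomic_read_opt() is a name I'm making up in this thread, not an existing
interface, and the volatile-at-the-point-of-use style follows the patch set
under discussion):

	typedef struct { int counter; } atomic_t;

	/* Volatile at the use site: every call forces a fresh load
	 * from memory. */
	#define atomic_read(v)		(*(volatile int *)&(v)->counter)

	/* Hypothetical non-volatile variant: the compiler is free to
	 * keep the value in a register or merge repeated reads. */
	#define atomic_read_opt(v)	((v)->counter)
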
> The atomic_read()s that I have in slub really do not care about when the
> variables are read. And if volatile creates overhead then I'd rather not have it.

A single volatile access, taken in isolation, is no more expensive than a
non-volatile one. The overhead shows up when you have dependencies between
accesses, because the compiler can no longer cache or combine them. If you're
doing a bunch of atomic operations on the same atomic_t in quick succession,
you will see some overhead. Of course, if you're doing that, I think you have
a design problem.
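
To make that concrete, here's a contrived example (hypothetical code, not from
any patch in this series, assuming the usual atomic_t API from <asm/atomic.h>).
The first version issues a separate load on every iteration because each
atomic_read() is a volatile access; the second takes one snapshot and lets the
compiler keep it in a register, which is fine whenever you don't care exactly
when the value was read:

	/* One volatile load per iteration: this is where the "quick
	 * succession" overhead comes from. */
	static int sum_weighted(atomic_t *v, const int *weight, int n)
	{
		int i, total = 0;

		for (i = 0; i < n; i++)
			total += atomic_read(v) * weight[i];
		return total;
	}

	/* One volatile load total; the snapshot can live in a register
	 * for the whole loop. */
	static int sum_weighted_snapshot(atomic_t *v, const int *weight, int n)
	{
		int i, total = 0;
		int snap = atomic_read(v);

		for (i = 0; i < n; i++)
			total += snap * weight[i];
		return total;
	}
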

On modern, register-rich CPUs with cache latencies of a couple of clock
cycles, volatile generally isn't the performance hit it used to be. Going out
of your way to avoid it on such hardware would be premature optimization.
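
For reference, the barrier() I mentioned above really is volatile underneath;
in include/linux/compiler-gcc.h it is just an empty volatile asm with a memory
clobber, roughly:

	#define barrier() __asm__ __volatile__("": : :"memory")

so distrusting volatile while trusting barrier() doesn't actually buy you
anything.
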
-- Chris