Message-ID: <46BB0B4B.4070300@redhat.com>
Date: Thu, 09 Aug 2007 08:40:43 -0400
From: Chris Snook <csnook@...hat.com>
To: Michael Buesch <mb@...sch.de>
CC: Andi Kleen <andi@...stfloor.org>,
Heiko Carstens <heiko.carstens@...ibm.com>,
David Miller <davem@...emloft.net>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
schwidefsky@...ibm.com, wensong@...ux-vs.org, horms@...ge.net.au,
torvalds@...l.org
Subject: Re: [patch] ipvs: force read of atomic_t in while loop
Michael Buesch wrote:
> On Thursday 09 August 2007 02:15:33 Andi Kleen wrote:
>> On Wed, Aug 08, 2007 at 05:08:44PM -0400, Chris Snook wrote:
>>> Heiko Carstens wrote:
>>>> On Wed, Aug 08, 2007 at 03:21:31AM -0700, David Miller wrote:
>>>>> From: Heiko Carstens <heiko.carstens@...ibm.com>
>>>>> Date: Wed, 8 Aug 2007 11:33:00 +0200
>>>>>
>>>>>> Just saw this while grepping for atomic_reads in while loops.
>>>>>> Maybe we should re-add the volatile to atomic_t. Not sure.
>>>>> I think whatever the choice, it should be done consistently
>>>>> on every architecture.
>>>>>
>>>>> It's just asking for trouble if your arch does it differently from
>>>>> every other.
>>>> Well..currently it's i386/x86_64 and s390 which have no volatile
>>>> in atomic_t. And yes, of course I agree it should be consistent
>>>> across all architectures. But it isn't.
>>> Based on recent discussion, it's pretty clear that there's a lot of
>>> confusion about this. A lot of people (myself included, until I thought
>>> about it long and hard) will reasonably assume that calling
>>> atomic_read() will actually read the value from memory. Leaving out the
>>> volatile declaration seems like a pessimization to me. If you force
>>> people to use barrier() everywhere they're working with atomic_t, it
>>> will force re-reads of all the non-atomic data in use as well, which
>>> will cause more memory fetches of things that generally don't need
>>> barrier(). That, and it's a bug waiting to happen.
>>>
>>> Andi -- your thoughts on the matter?
>> I also think re-adding volatile makes sense. An alternative would be
>> to stick an rmb() into atomic_read() -- that would also stop speculative reads.
>> The disadvantage is that it clobbers all memory, not just the specific value.
>>
>> But you really have to complain to Linus (cc'ed). He came up
>> with the volatile removal change iirc.
>
> Isn't it possible, through some inline assembly trick,
> to force only a certain variable to be reloaded?
> So we could define something like that:
>
> #define reload_var(x) __asm__ __volatile__ (whatever, x)
>
> I don't know inline assembly very well, but isn't it possible
> to use it to kind of "fake-touch" the variable, so the compiler
> must reload it (and only it) to make sure it's up to date?
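Yes, GCC's extended asm can do that: if you name the variable as a memory
operand, the compiler has to assume that one location was changed and reload
it, without clobbering everything else the way barrier() does. A rough sketch
of the idea (purely illustrative; the forget() name below isn't from any
kernel header):

#define forget(x) __asm__ __volatile__("" : "+m" (x))

static int flag;

static void wait_for_flag(void)
{
	while (!flag)
		forget(flag);	/* compiler must reload 'flag' on each pass */
}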
We can also do it in plain C, like this:
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (*(volatile int *)&(v)->counter)
By applying the volatile cast only at the precise point where we want to
guarantee a read from memory, there's little risk of the compiler getting
creative in its interpretation of the code.
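To make that concrete, here is a minimal sketch of the cast in use (a toy
user-space version, not the real kernel definitions; the names pending and
wait_until_done are made up for illustration):

typedef struct { int counter; } atomic_t;

#define atomic_read(v)	(*(volatile int *)&(v)->counter)

static atomic_t pending = { 1 };

static void wait_until_done(void)
{
	/*
	 * The volatile cast forces a fresh load of ->counter on every
	 * iteration, so the compiler cannot hoist the read out of the
	 * loop and spin on a stale register value.
	 */
	while (atomic_read(&pending) > 0)
		;	/* real code would cpu_relax() or sleep here */
}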
Stay tuned for the patch set...
-- Chris