Message-ID: <4D5C8019.70301@hitachi.com>
Date: Thu, 17 Feb 2011 10:55:37 +0900
From: Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc: Will Newton <will.newton@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Will Simoneau <simoneau@....uri.edu>,
David Miller <davem@...emloft.net>, hpa@...or.com,
matt@...sole-pimps.org, peterz@...radead.org, jbaron@...hat.com,
mingo@...e.hu, tglx@...utronix.de, andi@...stfloor.org,
roland@...hat.com, rth@...hat.com, fweisbec@...il.com,
avi@...hat.com, sam@...nborg.org, ddaney@...iumnetworks.com,
michael@...erman.id.au, linux-kernel@...r.kernel.org,
vapier@...too.org, cmetcalf@...era.com, dhowells@...hat.com,
schwidefsky@...ibm.com, heiko.carstens@...ibm.com,
benh@...nel.crashing.org, 2nddept-manager@....hitachi.co.jp
Subject: Re: [PATCH 0/2] jump label: 2.6.38 updates

(2011/02/16 22:24), Mathieu Desnoyers wrote:
> * Will Newton (will.newton@...il.com) wrote:
>> On Wed, Feb 16, 2011 at 12:18 PM, Steven Rostedt <rostedt@...dmis.org> wrote:
>>> On Wed, 2011-02-16 at 10:15 +0000, Will Newton wrote:
>>>
>>>>> That's some really crippled hardware... it does seem like *any* loads
>>>>> from *any* address updated by an sc would have to be done with ll as
>>>>> well, else they may load stale values. One could work this into
>>>>> atomic_read(), but surely there are other places that are problems.
>>>>
>>>> I think it's actually OK: atomics have arch-implemented accessors, as
>>>> do spinlocks and atomic bitops. Those are the only places we do sc, so
>>>> we can make sure we always ll or invalidate manually.
>>>
>>> I'm curious, how is cmpxchg() implemented on this architecture? There
>>> are several places in the kernel that use it on regular variables
>>> without any "accessor" functions.
>>
>> We can invalidate the cache manually. The current cpu will see the new
>> value (post-cache invalidate) and the other cpus will see either the
>> old value or the new value depending on whether they read before or
>> after the invalidate, which is racy but I don't think it is
>> problematic. Unless I'm missing something...
>
> Assuming the invalidate is specific to a cache-line, I'm concerned about
> the failure of a scenario like the following:
>
> initially:
> foo = 0
> bar = 0
>
> CPU A                                 CPU B
>
> xchg(&foo, 1);
>   ll foo
>   sc foo
>
>   -> interrupt
>
>   if (foo == 1)
>     xchg(&bar, 1);
>       ll bar
>       sc bar
>       invalidate bar
>
>                                       lbar = bar;
>                                       smp_mb()
>                                       lfoo = foo;
>                                       BUG_ON(lbar == 1 && lfoo == 0);
>
>   invalidate foo
>
> It should be valid to expect that whenever the "bar" value read by CPU B
> is 1, "foo" also reads as 1. However, in this case, the missing
> invalidate on foo is keeping the updated cacheline from reaching CPU B.
> There seems to be a problem with interrupts/NMIs coming right between
> sc and invalidate, as Ingo pointed out.
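
For reference, here is roughly how I read the xchg() pattern Will
describes. This is only a sketch; arch_ll(), arch_sc() and
arch_invalidate_line() are placeholder names I made up, not the real
port's primitives:

/* Placeholder primitives for the unnamed architecture under discussion. */
extern unsigned long arch_ll(volatile unsigned long *p);
extern int arch_sc(volatile unsigned long *p, unsigned long val);
extern void arch_invalidate_line(const volatile void *p);

/*
 * Sketch of the xchg() pattern as I understand Will's description:
 * the new value is stored by sc first, and the cache line is
 * invalidated afterwards.  The window Mathieu describes is between
 * arch_sc() and arch_invalidate_line().
 */
static inline unsigned long sketch_xchg(volatile unsigned long *p,
					unsigned long val)
{
	unsigned long old;

	do {
		old = arch_ll(p);	/* load-linked */
	} while (!arch_sc(p, val));	/* store-conditional, retry on failure */

	/*
	 * An interrupt/NMI taken here can run code (e.g. xchg(&bar, 1))
	 * whose effect becomes visible to other CPUs while they still
	 * hold a stale copy of *p.
	 */
	arch_invalidate_line(p);	/* make the new value visible remotely */

	return old;
}
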
Hmm, I think that pattern is mis-coded ll/sc usage.
If I understand correctly, the cache invalidation should usually be done
right before storing the value, as the MSI protocol does
(or sc should atomically invalidate the cache line).
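
As a sketch of the ordering I mean, using the same placeholder
primitives as above (and assuming the invalidate does not drop the ll
reservation on this hardware; otherwise sc itself has to perform the
invalidation):

/*
 * Sketch only: the line is invalidated before the store-conditional
 * publishes the new value, so no CPU can act on the new value and
 * create a dependent store while other CPUs still cache a stale copy.
 */
static inline unsigned long sketch_xchg_fixed(volatile unsigned long *p,
					      unsigned long val)
{
	unsigned long old;

	do {
		old = arch_ll(p);		/* load-linked */
		arch_invalidate_line(p);	/* invalidate before publishing */
	} while (!arch_sc(p, val));		/* store-conditional, retry on failure */

	return old;
}
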
Thank you,
--
Masami HIRAMATSU
2nd Dept. Linux Technology Center
Hitachi, Ltd., Systems Development Laboratory
E-mail: masami.hiramatsu.pt@...achi.com