Message-ID: <BLU0-SMTP5EDE1BB3410CC9856693096D20@phx.gbl>
Date: Wed, 16 Feb 2011 08:24:38 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To: Will Newton <will.newton@...il.com>
CC: Steven Rostedt <rostedt@...dmis.org>,
Will Simoneau <simoneau@....uri.edu>,
David Miller <davem@...emloft.net>, hpa@...or.com,
matt@...sole-pimps.org, peterz@...radead.org, jbaron@...hat.com,
mingo@...e.hu, tglx@...utronix.de, andi@...stfloor.org,
roland@...hat.com, rth@...hat.com, masami.hiramatsu.pt@...achi.com,
fweisbec@...il.com, avi@...hat.com, sam@...nborg.org,
ddaney@...iumnetworks.com, michael@...erman.id.au,
linux-kernel@...r.kernel.org, vapier@...too.org,
cmetcalf@...era.com, dhowells@...hat.com, schwidefsky@...ibm.com,
heiko.carstens@...ibm.com, benh@...nel.crashing.org
Subject: Re: [PATCH 0/2] jump label: 2.6.38 updates
* Will Newton (will.newton@...il.com) wrote:
> On Wed, Feb 16, 2011 at 12:18 PM, Steven Rostedt <rostedt@...dmis.org> wrote:
> > On Wed, 2011-02-16 at 10:15 +0000, Will Newton wrote:
> >
> >> > That's some really crippled hardware... it does seem like *any* loads
> >> > from *any* address updated by an sc would have to be done with ll as
> >> > well, else they may load stale values. One could work this into
> >> > atomic_read(), but surely there are other places that are problems.
> >>
> >> I think it's actually ok: atomics have arch-implemented accessors, as
> >> do spinlocks and atomic bitops. Those are the only places where we do
> >> sc, so we can make sure we always ll or invalidate manually.
> >
> > I'm curious, how is cmpxchg() implemented on this architecture? As there
> > are several places in the kernel that uses this on regular variables
> > without any "accessor" functions.
>
> We can invalidate the cache manually. The current cpu will see the new
> value (post-cache invalidate) and the other cpus will see either the
> old value or the new value depending on whether they read before or
> after the invalidate, which is racy but I don't think it is
> problematic. Unless I'm missing something...
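
For concreteness, a cmpxchg() built that way would presumably look
something like the sketch below. __ll(), __sc() and __cache_invalidate()
are hypothetical stand-ins for whatever primitives the architecture
actually provides (they are not taken from any posted code):

/*
 * Hypothetical sketch of an ll/sc cmpxchg followed by a manual cache
 * invalidate.  The __ll()/__sc()/__cache_invalidate() primitives are
 * placeholders, not real interfaces from this thread.
 */
extern unsigned long __ll(volatile unsigned long *addr);          /* load-linked       */
extern int __sc(volatile unsigned long *addr, unsigned long val); /* store-conditional */
extern void __cache_invalidate(volatile unsigned long *addr);     /* push line out     */

static inline unsigned long
arch_cmpxchg(volatile unsigned long *ptr, unsigned long old, unsigned long new)
{
	unsigned long cur;

	do {
		cur = __ll(ptr);
		if (cur != old)
			return cur;	/* no store done, nothing to invalidate */
	} while (!__sc(ptr, new));

	/*
	 * The window discussed below: an interrupt or NMI landing here
	 * delays the invalidate, so other CPUs can keep reading the
	 * stale value even though the sc has already succeeded locally.
	 */
	__cache_invalidate(ptr);

	return cur;
}
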
Assuming the invalidate is specific to a cache-line, I'm concerned about
a failure scenario like the following:
initially:
  foo = 0
  bar = 0

CPU A                                    CPU B

xchg(&foo, 1);
  ll foo
  sc foo
  -> interrupt
     if (foo == 1)
       xchg(&bar, 1);
         ll bar
         sc bar
         invalidate bar
                                         lbar = bar;
                                         smp_mb()
                                         lfoo = foo;
                                         BUG_ON(lbar == 1 && lfoo == 0);
  invalidate foo
It should be valid to expect that every time CPU B reads "bar" as 1, it
also reads "foo" as 1. However, in this case, the missing invalidate on
foo keeps the updated cacheline from reaching CPU B, so the BUG_ON can
fire. The problem, as Ingo pointed out, seems to be an interrupt/NMI
arriving right between the sc and the invalidate.
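
In generic terms, the expectation above is just the usual publish/observe
pattern: xchg() implies a full memory barrier, which pairs with the
smp_mb() on the reader side. A rough kernel-style sketch, with
illustrative names (cpu_a(), cpu_b(), foo, bar are placeholders, not code
from the patches under discussion):

static int foo, bar;

static void cpu_a(void)			/* publisher */
{
	xchg(&foo, 1);			/* the store to foo must be visible ...    */
	if (foo == 1)
		xchg(&bar, 1);		/* ... before bar can ever be read as 1    */
}

static void cpu_b(void)			/* observer */
{
	int lbar, lfoo;

	lbar = bar;
	smp_mb();			/* pairs with the barrier implied by xchg() */
	lfoo = foo;

	BUG_ON(lbar == 1 && lfoo == 0);	/* must never fire */
}
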
Thanks,
Mathieu
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com