Message-ID: <alpine.LFD.2.00.1002010722350.4206@localhost.localdomain>
Date: Mon, 1 Feb 2010 07:27:16 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
cc: akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Steven Rostedt <rostedt@...dmis.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Nicholas Miell <nmiell@...cast.net>, laijs@...fujitsu.com,
dipankar@...ibm.com, josh@...htriplett.org, dvhltc@...ibm.com,
niv@...ibm.com, tglx@...utronix.de, peterz@...radead.org,
Valdis.Kletnieks@...edu, dhowells@...hat.com
Subject: Re: [patch 2/3] scheduler: add full memory barriers upon task switch
at runqueue lock/unlock
On Sun, 31 Jan 2010, Mathieu Desnoyers wrote:
>
> Adds no overhead on x86, because LOCK-prefixed atomic operations of the spin
> lock/unlock already imply a full memory barrier.
.. and as Nick pointed out, you're fundamentally incorrect on this.
unlock on x86 is no memory barrier at all, since the x86 memory ordering
rules are such that a regular store always has release consistency.
But more importantly, you don't even explain why the added smp_mb()
helps.
Why does a smp_mb() at the lock/unlock even matter? Reading accesses by
the same CPU sure as hell do _not_ matter, so the whole concept seems
totally broken. There is no way in _hell_ that whatever unlocked thing
can ever write the variables protected by the lock, only read them. So a
full memory barrier makes zero sense to begin with.
So what are these magical memory barriers all about?
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/