[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <alpine.LFD.2.00.1002010701270.4206@localhost.localdomain>
Date: Mon, 1 Feb 2010 07:22:01 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
cc: akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Steven Rostedt <rostedt@...dmis.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Nicholas Miell <nmiell@...cast.net>, laijs@...fujitsu.com,
dipankar@...ibm.com, josh@...htriplett.org, dvhltc@...ibm.com,
niv@...ibm.com, tglx@...utronix.de, peterz@...radead.org,
Valdis.Kletnieks@...edu, dhowells@...hat.com
Subject: Re: [patch 1/3] Create spin lock/spin unlock with distinct memory
barrier
On Sun, 31 Jan 2010, Mathieu Desnoyers wrote:
>
> Create the primitive family:
>
> spin_lock__no_acquire
> spin_unlock__no_release
> spin_lock_irq__no_acquire
> spin_unlock_irq__no_release
I really really hate this patch.
Locking is really hard to get right anyway, you already had one bug in
this, and on the major architectures, none of this will matter _anyway_,
since you cannot split the acquire from the lock and the release from the
unlock.
So the only operation that actually makes any sense at all would be
"smp_mb__after_spin_lock" (and no new spinlock primitives at all), since
it can be optimized away on x86 (and then add "smp_mb__before_spin_unlock"
just to make them symmetric). But even that one is very dubious:
"The first user of these primitives is the scheduler code, where a full
memory barrier is wanted before and after runqueue data structure
modifications so these can be read safely by sys_membarrier without
holding the rq lock."
what kind of crazy model is this? That sounds insane. Locking together
with some new random optimistic usage that we don't even know how
performance-critical it is? Mixing locking and non-locking is simply
wrong. Why would it be any safer to read whatever that the lock protects
by adding smp barriers at the lock?
If you need other smp barriers at the lock, then what about the non-locked
accesses _while_ the lock is held? You get no ordering guarantees there.
The whole thing sounds highly dubious.
And all of this for something that is a new system call that nobody
actually uses? To optimize the new and experimental path with some insane
lockfree model, while making the core kernel more complex? A _very_
strong NAK from me.
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists