Message-ID: <alpine.LFD.2.00.0901061451200.3057@localhost.localdomain>
Date: Tue, 6 Jan 2009 14:56:23 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <peterz@...radead.org>
cc: paulmck@...ux.vnet.ibm.com, Gregory Haskins <ghaskins@...ell.com>,
Ingo Molnar <mingo@...e.hu>, Matthew Wilcox <matthew@....cx>,
Andi Kleen <andi@...stfloor.org>,
Chris Mason <chris.mason@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-btrfs <linux-btrfs@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
Nick Piggin <npiggin@...e.de>,
Peter Morreale <pmorreale@...ell.com>,
Sven Dietrich <SDietrich@...ell.com>
Subject: Re: [PATCH][RFC]: mutex: adaptive spin
On Tue, 6 Jan 2009, Peter Zijlstra wrote:
> >
> > In fact, I suspect that's the real bug you're hitting: you're enabling
> > preemption while holding a spinlock. That is NOT a good idea.
>
> spinlocks also fiddle with preempt_count, that should all work out -
> although granted, it does look funny.
It most certainly doesn't always work out.
For example, the irq-disabling ones do *not* fiddle with preempt_count,
because they disable preemption by just disabling interrupts. So doing
preempt_enable() inside such a spinlock is almost guaranteed to lock up,
because the preempt_enable() will now potentially call the scheduler with
a spinlock held and with interrupts disabled.
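To make that concrete, here is a minimal sketch of the problematic shape
(illustrative only, with a made-up lock name, not code from the patch being
discussed):

    #include <linux/spinlock.h>
    #include <linux/preempt.h>

    /* Illustrative sketch only; not code from the patch under discussion. */
    static DEFINE_SPINLOCK(my_lock);            /* hypothetical lock */

    static void buggy(void)
    {
            unsigned long flags;

            /* IRQs off; per the above, preemption is held off only by that. */
            spin_lock_irqsave(&my_lock, flags);

            /*
             * Per the argument above, this can potentially end up in the
             * scheduler right here, with my_lock held and IRQs disabled.
             */
            preempt_enable();
            preempt_disable();                  /* nominal re-balance */

            spin_unlock_irqrestore(&my_lock, flags);
    }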
That, in turn, can cause any number of problems - deadlocks with other
processes that then try to take the spinlock that didn't get released, but
also deadlocks with interrupts, since the scheduler will enable interrupts
again.
So mixing preemption and spinlocks is almost always a bug. Yes, _some_
cases work out ok, but I'd call those the odd ones.
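For contrast, a sketch of the kind of case that does happen to work out: on a
preemptible kernel the plain spin_lock() bumps preempt_count itself, so an
inner disable/enable pair just nests and the count never reaches zero while
the lock is held (again illustrative only, assuming CONFIG_PREEMPT):

    #include <linux/spinlock.h>
    #include <linux/preempt.h>

    static DEFINE_SPINLOCK(other_lock);         /* hypothetical lock */

    static void nests_ok(void)
    {
            spin_lock(&other_lock);             /* preempt_count 0 -> 1 */

            preempt_disable();                  /* 1 -> 2 */
            preempt_enable();                   /* 2 -> 1: never zero, so no
                                                 * preemption inside the lock */

            spin_unlock(&other_lock);           /* 1 -> 0 */
    }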
Linus