[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <alpine.LFD.1.10.0805141136340.3019@woody.linux-foundation.org>
Date: Wed, 14 May 2008 11:41:01 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Ingo Molnar <mingo@...e.hu>
cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Thomas Gleixner <tglx@...utronix.de>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Alexander Viro <viro@....linux.org.uk>
Subject: Re: [announce] "kill the Big Kernel Lock (BKL)" tree
On Wed, 14 May 2008, Ingo Molnar wrote:
>
> Linus, Alan: the increased visibility and debuggability of the BKL
> already uncovered a rather serious regression in upstream -git. You
> might want to cherry pick this single fix, it will apply just fine to
> current -git:
Ok, so I'm obviously happy. This is exactly the kind of thing I would want
to see.
That said, the way it is now set up, it's unreasonable to merge anything
directly, and while I can cherry-pick obvious fixes this way, I do think
we could do things better.
It should be possible to set things up so that it's a config option, and
we can mark it EXPERIMENTAL but still merge it into the standard kernel,
so that we'd have the debug stuff there. That would get a lot more
coverage, especially if it all still *works*, even if the debug stuff then
complains (ie it would be nicer if the lock itself didn't start breaking).
So for example, have CONFIG_DEBUG_BKL turn it into a mutex (and select
mutex debugging), and get all the debug coverage that way, but then when
somebody enters the scheduler with the lock held, first complain, but then
auto-release it anyway. That way, bugs get found and complained about, but
hopefully the machine still ends up working.
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists