Message-ID: <20110608191758.GA12457@elte.hu>
Date: Wed, 8 Jun 2011 21:17:58 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Arne Jansen <lists@...-jansens.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
mingo@...hat.com, hpa@...or.com, linux-kernel@...r.kernel.org,
efault@....de, npiggin@...nel.dk, akpm@...ux-foundation.org,
frank.rowand@...sony.com, tglx@...utronix.de,
linux-tip-commits@...r.kernel.org
Subject: Re: [debug patch] printk: Add a printk killswitch to robustify NMI
watchdog messages

* Peter Zijlstra <peterz@...radead.org> wrote:
> I came up with the below hackery, seems to actually boot and such
> on a lockdep enabled kernel (although Ingo did report lockups with
> a partial version of the patch, still need to look at that).
>
> The idea is to use the console_sem.lock instead of the semaphore
> itself: we flush the console when console_sem.count > 0, which
> means it's uncontended. It's more or less equivalent to
> down_trylock() + up(), except it never releases the sem-internal
> lock, and it optimizes the count fiddling away.
>
> It doesn't require a wakeup because any real semaphore contention
> will still be spinning on the spinlock instead of enqueueing itself
> on the waitlist.
>
> It's rather ugly and exposes semaphore internals in places it
> shouldn't; we could of course expose some primitives for this,
> but then people might think it'd be okay to use them etc.
>
> /me puts on the asbestos underwear
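
(For illustration, here is a rough, untested sketch of the trick being
described, assuming the current struct semaphore layout where ->lock is
a raw_spinlock_t; console_sem_example and console_try_flush() are
made-up names for this example, not code from the actual patch:)

#include <linux/semaphore.h>
#include <linux/spinlock.h>

/* Stand-in for kernel/printk.c's console_sem (made-up name). */
static struct semaphore console_sem_example =
	__SEMAPHORE_INITIALIZER(console_sem_example, 1);

static void console_try_flush(void)
{
	unsigned long flags;

	/*
	 * Take only the semaphore-internal spinlock; never touch the
	 * count or the wait list.
	 */
	raw_spin_lock_irqsave(&console_sem_example.lock, flags);

	if (console_sem_example.count > 0) {
		/*
		 * count > 0: nobody holds the console semaphore, so
		 * pending printk output could be pushed to the
		 * consoles right here.  This is roughly
		 * down_trylock() + up() with the count updates
		 * optimized away and without ever dropping the
		 * internal lock in between.
		 */
		/* ... call the actual console flushing code ... */
	}

	/*
	 * No wakeup is needed: any real contender on the semaphore is
	 * still spinning on ->lock rather than sleeping on wait_list.
	 */
	raw_spin_unlock_irqrestore(&console_sem_example.lock, flags);
}
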
Hm, the no-wakeup aspect seems rather useful.

Could we perhaps remove console_sem, replace it with a mutex, and do
something like this with the mutex's ->wait_lock (rough sketch below)?

We'd have two happy side effects:
- we'd thus remove one of the last core kernel semaphore users
- we'd gain lockdep coverage for console locking as a bonus ...
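
(A minimal sketch of that direction, assuming the struct mutex fields
of that era where ->wait_lock is a spinlock_t; console_mutex_example
and console_mutex_try_flush() are made-up names.  Note that unlike the
semaphore, the mutex fastpath does not take ->wait_lock, so this only
illustrates the shape of the idea, not a correctness argument:)

#include <linux/mutex.h>
#include <linux/spinlock.h>

/* Would replace console_sem in kernel/printk.c (made-up name). */
static DEFINE_MUTEX(console_mutex_example);

static void console_mutex_try_flush(void)
{
	unsigned long flags;

	/* Peek at the mutex state via its internal wait_lock only. */
	spin_lock_irqsave(&console_mutex_example.wait_lock, flags);

	if (!mutex_is_locked(&console_mutex_example)) {
		/*
		 * Nobody holds the (would-be) console mutex, so pending
		 * printk output could be flushed to the consoles here.
		 *
		 * Caveat: the mutex fastpath acquires the lock without
		 * taking ->wait_lock, so this check alone does not keep
		 * a new owner out while we flush.
		 */
		/* ... flush ... */
	}

	spin_unlock_irqrestore(&console_mutex_example.wait_lock, flags);
}

(The lockdep coverage would come from converting the normal acquire and
release paths from down()/up() to mutex_lock()/mutex_unlock(), since
mutexes carry lockdep annotations while plain semaphores do not.)
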
Thanks,
Ingo