Message-ID: <20120520052731.GA3864@zhy>
Date: Sun, 20 May 2012 13:27:31 +0800
From: Yong Zhang <yong.zhang0@...il.com>
To: Christophe Huriaux <c.huriaux@...il.com>
Cc: Uwe Kleine-König
	<u.kleine-koenig@...gutronix.de>, linux-rt-users@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org
Subject: [PATCH] genirq: don't sync irq thread if current happens to be the
	very irq thread

On Thu, May 10, 2012 at 03:17:17PM +0200, Christophe Huriaux wrote:
> 2012/5/9 Uwe Kleine-König <u.kleine-koenig@...gutronix.de>:
> > If you enable CONFIG_KALLSYMS you get a more usable backtrace.
> > Alternatively you can use
> >
> >        $CROSS_COMPILE-addr2line -e vmlinux 0xc000e90c
> >
> > to get the file and line that resulted in the code at that address.
> >
>
> Thanks, I was wondering which config option would enable that. The
> complete backtrace is much more usable:
Actually I don't think this is a -rt issue, you could also trigger this
warning with vanilla if you boot your kernel with 'threadirqs'.
Could you please try the following patch?
Thanks,
Yong
---
From: Yong Zhang <yong.zhang@...driver.com>
Date: Sun, 20 May 2012 12:56:46 +0800
Subject: [PATCH] genirq: don't sync irq thread if current happens to be the very irq thread

Christophe reported against -rt:
BUG: scheduling while atomic: irq/37-s3c-mci/253/0x00000102
Modules linked in:
[<c000e9fc>] (unwind_backtrace+0x0/0x12c) from [<c029b82c>] (__schedule+0x58/0x2c0)
[<c029b82c>] (__schedule+0x58/0x2c0) from [<c029bc10>] (schedule+0x8c/0xb0)
[<c029bc10>] (schedule+0x8c/0xb0) from [<c0055614>] (synchronize_irq+0xbc/0xd8)
[<c0055614>] (synchronize_irq+0xbc/0xd8) from [<c01db6b0>] (pio_tasklet+0x34/0x11c)
[<c01db6b0>] (pio_tasklet+0x34/0x11c) from [<c0024914>] (__tasklet_action+0x68/0x80)
[<c0024914>] (__tasklet_action+0x68/0x80) from [<c0024ca4>] (__do_softirq+0x88/0x130)
[<c0024ca4>] (__do_softirq+0x88/0x130) from [<c0024ef0>] (do_softirq+0x48/0x54)
[<c0024ef0>] (do_softirq+0x48/0x54) from [<c0025048>] (local_bh_enable+0x8c/0xc0)
[<c0025048>] (local_bh_enable+0x8c/0xc0) from [<c0054678>] (irq_forced_thread_fn+0x4c/0x54)
[<c0054678>] (irq_forced_thread_fn+0x4c/0x54) from [<c0054454>] (irq_thread+0xa0/0x1c0)
[<c0054454>] (irq_thread+0xa0/0x1c0) from [<c0038628>] (kthread+0x84/0x8c)
[<c0038628>] (kthread+0x84/0x8c) from [<c000a100>] (kernel_thread_exit+0x0/0x8)
When looking at this issue, I found a typical deadlock scenario with
forced threaded irqs:

irq_forced_thread_fn()
  local_bh_enable()
    do_softirq()
      disable_irq()
        synchronize_irq()
          wait_event()
          /* DEAD: waiting for the very irq thread we are running in */
Cure it by skipping the sync if current happens to be the very irq thread.
Reported-by: Christophe Huriaux <c.huriaux@...il.com>
Signed-off-by: Yong Zhang <yong.zhang0@...il.com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
---
 kernel/irq/manage.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 89a3ea8..d5b96e7 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -41,6 +41,7 @@ early_param("threadirqs", setup_forced_irqthreads);
 void synchronize_irq(unsigned int irq)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
+	struct irqaction *action;
 	bool inprogress;
 
 	if (!desc)
@@ -67,7 +68,16 @@ void synchronize_irq(unsigned int irq)
 	/*
 	 * We made sure that no hardirq handler is running. Now verify
 	 * that no threaded handlers are active.
+	 * But for a threaded irq, don't sync if current happens to be
+	 * the irq thread itself; otherwise we would deadlock.
 	 */
+	action = desc->action;
+	while (action) {
+		if (action->thread && action->thread == current)
+			return;
+		action = action->next;
+	}
+
 	wait_event(desc->wait_for_threads, !atomic_read(&desc->threads_active));
 }
 EXPORT_SYMBOL(synchronize_irq);
--
1.7.1