Message-ID: <4CCBD511.40607@kernel.org>
Date: Sat, 30 Oct 2010 01:19:29 -0700
From: Yinghai Lu <yinghai@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
"H. Peter Anvin" <hpa@...or.com>
CC: Russ Anderson <rja@....com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: [PATCH] x86, uv: Fix uv with destroy_irq()
Russ found that:
| There is a regression that is causing a NULL pointer dereference
| in free_irte when shutting down xpc. git bisect narrowed it down
| to git commit d585d060b42bd36f6f0b23ff327d3b91f80c7139, which
| changed free_irte(). Reverse applying the patch fixes the problem.
and he bisected to
| commit d585d060b42bd36f6f0b23ff327d3b91f80c7139
| Author: Thomas Gleixner <tglx@...utronix.de>
| Date:   Sun Oct 10 12:34:27 2010 +0200
|
| intr_remap: Simplify the code further
|
| Having irq_2_iommu in struct irq_cfg allows further simplifications.
We need to check irq_remapped() on each individual irq instead of relying on
the global intr_remapping_enabled flag, so that free_irte() is only called for
irqs that actually have a remapping entry.
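
For context, a rough sketch of destroy_irq() with the fix applied (not the
exact in-tree code: the body of irq_remapped() and the tail of the function
are assumptions for illustration; the conditional itself matches the patch):

/* Sketch only -- field names and surrounding code may differ in the tree. */
static inline bool irq_remapped(struct irq_cfg *cfg)
{
	/* assumed check: an irq is remapped only if an iommu was recorded */
	return cfg->irq_2_iommu.iommu != NULL;
}

void destroy_irq(unsigned int irq)
{
	struct irq_cfg *cfg = get_irq_chip_data(irq);
	unsigned long flags;

	irq_set_status_flags(irq, IRQ_NOREQUEST|IRQ_NOPROBE);

	/*
	 * The old "if (intr_remapping_enabled)" called free_irte() for
	 * every irq once remapping was on, including irqs (such as xpc's)
	 * that never had an IRTE allocated, which is where the NULL
	 * pointer dereference came from.
	 */
	if (irq_remapped(get_irq_chip_data(irq)))
		free_irte(irq);

	raw_spin_lock_irqsave(&vector_lock, flags);
	__clear_irq_vector(irq, cfg);
	raw_spin_unlock_irqrestore(&vector_lock, flags);
	/* descriptor teardown continues here in the real function */
}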
Reported-by: Russ Anderson <rja@....com>
Bisected-by: Russ Anderson <rja@....com>
Tested-by: Russ Anderson <rja@....com>
Signed-off-by: Yinghai Lu <yinghai@...nel.org>
---
arch/x86/kernel/apic/io_apic.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6/arch/x86/kernel/apic/io_apic.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/apic/io_apic.c
+++ linux-2.6/arch/x86/kernel/apic/io_apic.c
@@ -3109,7 +3109,7 @@ void destroy_irq(unsigned int irq)
 
 	irq_set_status_flags(irq, IRQ_NOREQUEST|IRQ_NOPROBE);
 
-	if (intr_remapping_enabled)
+	if (irq_remapped(get_irq_chip_data(irq)))
 		free_irte(irq);
 	raw_spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq, cfg);
--