Message-Id: <20191017101938.321393687@linutronix.de>
Date: Thu, 17 Oct 2019 12:19:01 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: x86@...nel.org, Sebastian Siewior <bigeasy@...utronix.de>,
Andy Shevchenko <andy.shevchenko@...il.com>
Subject: [patch 1/2] x86/ioapic: Prevent inconsistent state when moving an
interrupt
There is an issue with threaded interrupts which are marked ONESHOT
and use the fasteoi handler:
  if (IS_ONESHOT())
    mask_irq();

  ....

  cond_unmask_eoi_irq()
    chip->irq_eoi();
      if (setaffinity_pending) {
         mask_ioapic();
         ...
         move_affinity();
         unmask_ioapic();
      }
So if setaffinity is pending, the interrupt will be moved and then
unconditionally unmasked at the IO/APIC level, which is wrong in two
aspects:
 1) It should be kept masked up to the point where the threaded handler
    finished.

 2) The physical chip state and the software masked state are
    inconsistent (modeled in the sketch below).
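
To make point 2 concrete, here is a minimal user space C sketch that
models the software masked flag next to the hardware mask bit and
replays the sequence above. It is illustration only, not kernel code;
all names (toy_irq, oneshot_mask(), buggy_eoi()) are made up:

  #include <stdbool.h>
  #include <stdio.h>

  /* Toy model only; none of these names exist in the kernel. */
  struct toy_irq {
          bool sw_masked;         /* what the core believes (IRQD_IRQ_MASKED) */
          bool hw_masked;         /* the mask bit at the IO/APIC level */
          bool move_pending;      /* setaffinity pending */
  };

  /* ONESHOT handling: mask until the threaded handler finished */
  static void oneshot_mask(struct toy_irq *irq)
  {
          irq->sw_masked = true;
          irq->hw_masked = true;
  }

  /* The buggy eoi path: move the irq, then unconditionally unmask */
  static void buggy_eoi(struct toy_irq *irq)
  {
          if (irq->move_pending) {
                  irq->hw_masked = true;          /* mask_ioapic() */
                  /* ... move_affinity() ... */
                  irq->hw_masked = false;         /* unmask_ioapic() */
                  irq->move_pending = false;
          }
  }

  int main(void)
  {
          struct toy_irq irq = { .move_pending = true };

          oneshot_mask(&irq);     /* IS_ONESHOT() -> mask_irq() */
          buggy_eoi(&irq);        /* chip->irq_eoi() with a move pending */

          /* Prints sw_masked=1 hw_masked=0: the core thinks the line is
           * masked, but the hardware can fire again before the threaded
           * handler has run. */
          printf("sw_masked=%d hw_masked=%d\n",
                 irq.sw_masked, irq.hw_masked);
          return 0;
  }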
Guard both the mask and the unmask with a check for the software masked
state. If the line is marked masked then the IO/APIC line is also masked,
so both mask_ioapic_irq() and unmask_ioapic_irq() can be skipped safely.
Fixes: 3aa551c9b4c4 ("genirq: add threaded interrupt handler support")
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
arch/x86/kernel/apic/io_apic.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1729,7 +1729,8 @@ static inline bool ioapic_irqd_mask(stru
 {
 	/* If we are moving the irq we need to mask it */
 	if (unlikely(irqd_is_setaffinity_pending(data))) {
-		mask_ioapic_irq(data);
+		if (!irqd_irq_masked(data))
+			mask_ioapic_irq(data);
 		return true;
 	}
 	return false;
@@ -1766,7 +1767,9 @@ static inline void ioapic_irqd_unmask(st
 	 */
 	if (!io_apic_level_ack_pending(data->chip_data))
 		irq_move_masked_irq(data);
-	unmask_ioapic_irq(data);
+	/* If the irq is masked in the core, leave it */
+	if (!irqd_irq_masked(data))
+		unmask_ioapic_irq(data);
 	}
 }
 #else
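
Applied to the toy model above, the same guard keeps the two flags
consistent. Again just a sketch with made-up names, mirroring what the
patch does with irqd_irq_masked():

  /* Fixed eoi path: skip the hardware (un)mask when the software
   * state says the line is masked anyway. */
  static void fixed_eoi(struct toy_irq *irq)
  {
          if (irq->move_pending) {
                  if (!irq->sw_masked)
                          irq->hw_masked = true;  /* mask_ioapic_irq() */
                  /* ... move_affinity() ... */
                  if (!irq->sw_masked)
                          irq->hw_masked = false; /* unmask_ioapic_irq() */
                  irq->move_pending = false;
          }
  }

With the ONESHOT mask in place sw_masked is true, so the line stays
masked in hardware across the move and is only unmasked once the
threaded handler finished and the core unmasks it.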