Message-Id: <20180624142744.015048976@linuxfoundation.org>
Date: Sun, 24 Jun 2018 23:22:49 +0800
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Song Liu <songliubraving@...com>,
Joerg Roedel <jroedel@...e.de>,
Peter Zijlstra <peterz@...radead.org>,
Song Liu <liu.song.a23@...il.com>,
Dmitry Safonov <0x7f454c46@...il.com>,
Mike Travis <mike.travis@....com>,
Borislav Petkov <bp@...en8.de>,
Tariq Toukan <tariqt@...lanox.com>
Subject: [PATCH 4.17 57/70] genirq/generic_pending: Do not lose pending affinity update
4.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner <tglx@...utronix.de>
commit a33a5d2d16cb84bea8d5f5510f3a41aa48b5c467 upstream.
The generic pending interrupt mechanism moves interrupts from the interrupt
handler on the original target CPU to the new destination CPU. This is
required for x86 and ia64 due to the way the interrupt delivery and
acknowledge works if the interrupts are not remapped.
However, that update can fail for various reasons. Some of them are valid
reasons to discard the pending update, but the case when the previous move
has not been fully cleaned up is not a legitimate reason to fail.
Check the return value of irq_do_set_affinity() for -EBUSY, which indicates
a pending cleanup, and rearm the pending move in the irq descriptor so it's
tried again when the next interrupt arrives.
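For illustration only (not part of the patch): a minimal user-space C sketch of the
rearm-on-EBUSY logic described above. The names fake_irq_state, try_set_affinity and
handle_pending_move are hypothetical stand-ins for the kernel's irq_desc,
irq_do_set_affinity() and irq_move_masked_irq().

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the relevant bits of struct irq_desc. */
struct fake_irq_state {
	bool move_pending;      /* models irqd_is_setaffinity_pending() */
	int  pending_target;    /* models desc->pending_mask */
};

/* Models irq_do_set_affinity(); fails with -EBUSY while a cleanup is pending. */
static int try_set_affinity(int cpu, bool cleanup_pending)
{
	if (cleanup_pending)
		return -EBUSY;
	printf("irq moved to CPU %d\n", cpu);
	return 0;
}

/* Models irq_move_masked_irq(), called from the interrupt path. */
static void handle_pending_move(struct fake_irq_state *st, bool cleanup_pending)
{
	int ret;

	if (!st->move_pending)
		return;
	st->move_pending = false;               /* models irqd_clr_move_pending() */

	ret = try_set_affinity(st->pending_target, cleanup_pending);
	if (ret == -EBUSY) {
		/* Rearm: keep the pending target and retry on the next interrupt. */
		st->move_pending = true;
		return;
	}
	st->pending_target = -1;                /* models cpumask_clear(desc->pending_mask) */
}

int main(void)
{
	struct fake_irq_state st = { .move_pending = true, .pending_target = 3 };

	handle_pending_move(&st, true);   /* cleanup still pending: move is rearmed */
	handle_pending_move(&st, false);  /* next interrupt: move succeeds */
	return 0;
}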
Fixes: 996c591227d9 ("x86/irq: Plug vector cleanup race")
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Tested-by: Song Liu <songliubraving@...com>
Cc: Joerg Roedel <jroedel@...e.de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Song Liu <liu.song.a23@...il.com>
Cc: Dmitry Safonov <0x7f454c46@...il.com>
Cc: stable@...r.kernel.org
Cc: Mike Travis <mike.travis@....com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Tariq Toukan <tariqt@...lanox.com>
Link: https://lkml.kernel.org/r/20180604162224.386544292@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
kernel/irq/migration.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
--- a/kernel/irq/migration.c
+++ b/kernel/irq/migration.c
@@ -38,17 +38,18 @@ bool irq_fixup_move_pending(struct irq_d
void irq_move_masked_irq(struct irq_data *idata)
{
struct irq_desc *desc = irq_data_to_desc(idata);
- struct irq_chip *chip = desc->irq_data.chip;
+ struct irq_data *data = &desc->irq_data;
+ struct irq_chip *chip = data->chip;
- if (likely(!irqd_is_setaffinity_pending(&desc->irq_data)))
+ if (likely(!irqd_is_setaffinity_pending(data)))
return;
- irqd_clr_move_pending(&desc->irq_data);
+ irqd_clr_move_pending(data);
/*
* Paranoia: cpu-local interrupts shouldn't be calling in here anyway.
*/
- if (irqd_is_per_cpu(&desc->irq_data)) {
+ if (irqd_is_per_cpu(data)) {
WARN_ON(1);
return;
}
@@ -73,9 +74,20 @@ void irq_move_masked_irq(struct irq_data
* For correct operation this depends on the caller
* masking the irqs.
*/
- if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids)
- irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
+ if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids) {
+ int ret;
+ ret = irq_do_set_affinity(data, desc->pending_mask, false);
+ /*
+	 * If there is a cleanup pending in the underlying
+ * vector management, reschedule the move for the next
+ * interrupt. Leave desc->pending_mask intact.
+ */
+ if (ret == -EBUSY) {
+ irqd_set_move_pending(data);
+ return;
+ }
+ }
cpumask_clear(desc->pending_mask);
}