Message-Id: <20171016161443.997701524@linuxfoundation.org>
Date: Mon, 16 Oct 2017 18:16:39 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Marc Zyngier <marc.zyngier@....com>,
Christoph Hellwig <hch@....de>
Subject: [PATCH 4.13 42/53] genirq/cpuhotplug: Add sanity check for effective affinity mask

4.13-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@...utronix.de>

commit 60b09c51bb4fb46e2331fdbb39f91520f31d35f7 upstream.

The effective affinity mask handling has no safety net when the mask is not
updated by the interrupt chip or the mask contains offline CPUs.
If that happens the CPU unplug code fails to migrate interrupts.
Add sanity checks and emit a warning when the mask contains only offline
CPUs.

Fixes: 415fcf1a2293 ("genirq/cpuhotplug: Use effective affinity mask")
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Marc Zyngier <marc.zyngier@....com>
Cc: Christoph Hellwig <hch@....de>
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1710042208400.2406@nanos
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
kernel/irq/cpuhotplug.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)

--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -18,8 +18,34 @@
 static inline bool irq_needs_fixup(struct irq_data *d)
 {
 	const struct cpumask *m = irq_data_get_effective_affinity_mask(d);
+	unsigned int cpu = smp_processor_id();
 
-	return cpumask_test_cpu(smp_processor_id(), m);
+#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
+	/*
+	 * The cpumask_empty() check is a workaround for interrupt chips,
+	 * which do not implement effective affinity, but the architecture has
+	 * enabled the config switch. Use the general affinity mask instead.
+	 */
+	if (cpumask_empty(m))
+		m = irq_data_get_affinity_mask(d);
+
+	/*
+	 * Sanity check. If the mask is not empty when excluding the outgoing
+	 * CPU then it must contain at least one online CPU. The outgoing CPU
+	 * has been removed from the online mask already.
+	 */
+	if (cpumask_any_but(m, cpu) < nr_cpu_ids &&
+	    cpumask_any_and(m, cpu_online_mask) >= nr_cpu_ids) {
+		/*
+		 * If this happens then there was a missed IRQ fixup at some
+		 * point. Warn about it and enforce fixup.
+		 */
+		pr_warn("Eff. affinity %*pbl of IRQ %u contains only offline CPUs after offlining CPU %u\n",
+			cpumask_pr_args(m), d->irq, cpu);
+		return true;
+	}
+#endif
+	return cpumask_test_cpu(cpu, m);
 }
 
 static bool migrate_one_irq(struct irq_desc *desc)
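
[Editor's note] To follow the logic of the new check outside the kernel, here is a minimal user-space model. It is only a sketch: an 8-bit mask stands in for struct cpumask, the helper names mask_has_other(), mask_has_online() and needs_fixup() plus the sample masks are made up for illustration, and the cpumask_empty() fallback is omitted. The two helpers only mirror what cpumask_any_but() and cpumask_any_and() establish in the patch: the mask names at least one CPU besides the outgoing one, yet none of its CPUs are online, so an earlier fixup was missed and migration must be enforced.

/*
 * Minimal user-space model of the sanity check in the patch above
 * (a sketch, not kernel code): an unsigned int stands in for
 * struct cpumask, bit N set meaning "CPU N is in the mask".
 */
#include <stdbool.h>
#include <stdio.h>

/* Does the mask contain any CPU other than 'cpu'?  (cf. cpumask_any_but) */
static bool mask_has_other(unsigned int mask, unsigned int cpu)
{
	return (mask & ~(1u << cpu)) != 0;
}

/* Does the mask intersect the online mask?  (cf. cpumask_any_and) */
static bool mask_has_online(unsigned int mask, unsigned int online)
{
	return (mask & online) != 0;
}

/* Model of irq_needs_fixup() after the patch; 'cpu' is the outgoing CPU. */
static bool needs_fixup(unsigned int eff, unsigned int online, unsigned int cpu)
{
	if (mask_has_other(eff, cpu) && !mask_has_online(eff, online)) {
		/* A fixup was missed earlier: warn and enforce migration. */
		printf("effective affinity 0x%x has only offline CPUs after offlining CPU %u\n",
		       eff, cpu);
		return true;
	}
	/* Normal case: fix up only if the outgoing CPU is in the mask. */
	return (eff & (1u << cpu)) != 0;
}

int main(void)
{
	/* CPU 3 is going down; CPUs 0-2 remain online (mask already updated). */
	unsigned int online = 0x07;

	/* Mask points at CPUs 3 and 4, both offline now: warning + forced fixup. */
	printf("-> %d\n", needs_fixup(0x18, online, 3));

	/* Mask contains the outgoing CPU 3 and online CPU 0: regular fixup. */
	printf("-> %d\n", needs_fixup(0x09, online, 3));

	/* Mask only contains online CPU 0: nothing to do for this IRQ. */
	printf("-> %d\n", needs_fixup(0x01, online, 3));

	return 0;
}

Compiled with any C compiler, only the first case prints the warning and forces a fixup; the second returns true simply because the outgoing CPU sits in the mask, and the third returns false.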