Date:   Thu, 22 Jun 2017 10:01:53 -0700
From:   tip-bot for Thomas Gleixner <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     mingo@...nel.org, mpe@...erman.id.au, keith.busch@...el.com,
        peterz@...radead.org, hch@....de, tglx@...utronix.de,
        linux-kernel@...r.kernel.org, axboe@...nel.dk,
        marc.zyngier@....com, hpa@...or.com
Subject: [tip:irq/core] genirq/cpuhotplug: Use effective affinity mask

Commit-ID:  415fcf1a2293046e0c1f4ab8558a87bad66652b1
Gitweb:     http://git.kernel.org/tip/415fcf1a2293046e0c1f4ab8558a87bad66652b1
Author:     Thomas Gleixner <tglx@...utronix.de>
AuthorDate: Tue, 20 Jun 2017 01:37:39 +0200
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Thu, 22 Jun 2017 18:21:21 +0200

genirq/cpuhotplug: Use effective affinity mask

If the architecture supports the effective affinity mask, migrating away
interrupts which are not targeted by the effective mask is pointless.

The outgoing CPU can stay in the user or system supplied affinity mask,
but it won't be targeted at any point because the affinity setter
functions have to validate against the online CPU mask anyway.

Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Jens Axboe <axboe@...nel.dk>
Cc: Marc Zyngier <marc.zyngier@....com>
Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: Keith Busch <keith.busch@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Christoph Hellwig <hch@....de>
Link: http://lkml.kernel.org/r/20170619235446.328488490@linutronix.de

---
 kernel/irq/cpuhotplug.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index e09cb91..0b093db 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -14,6 +14,14 @@
 
 #include "internals.h"
 
+/* For !GENERIC_IRQ_EFFECTIVE_AFF_MASK this looks at general affinity mask */
+static inline bool irq_needs_fixup(struct irq_data *d)
+{
+	const struct cpumask *m = irq_data_get_effective_affinity_mask(d);
+
+	return cpumask_test_cpu(smp_processor_id(), m);
+}
+
 static bool migrate_one_irq(struct irq_desc *desc)
 {
 	struct irq_data *d = irq_desc_get_irq_data(desc);
@@ -42,9 +50,7 @@ static bool migrate_one_irq(struct irq_desc *desc)
 	 * Note: Do not check desc->action as this might be a chained
 	 * interrupt.
 	 */
-	affinity = irq_data_get_affinity_mask(d);
-	if (irqd_is_per_cpu(d) || !irqd_is_started(d) ||
-	    !cpumask_test_cpu(smp_processor_id(), affinity)) {
+	if (irqd_is_per_cpu(d) || !irqd_is_started(d) || !irq_needs_fixup(d)) {
 		/*
 		 * If an irq move is pending, abort it if the dying CPU is
 		 * the sole target.
@@ -69,6 +75,8 @@ static bool migrate_one_irq(struct irq_desc *desc)
 	 */
 	if (irq_fixup_move_pending(desc, true))
 		affinity = irq_desc_get_pending_mask(desc);
+	else
+		affinity = irq_data_get_affinity_mask(d);
 
 	/* Mask the chip for interrupts which cannot move in process context */
 	if (maskchip && chip->irq_mask)
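
For context, a simplified sketch (not the verbatim include/linux/irq.h
definition) of how the accessor used by the new irq_needs_fixup() helper
is expected to behave: with CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK the
architecture tracks the CPUs the interrupt is really steered to, while
without it the accessor falls back to the general affinity mask, which is
what the comment added above the helper refers to.

/*
 * Simplified sketch, not the verbatim kernel code: accessor used by
 * irq_needs_fixup().  Field names follow struct irq_common_data.
 */
#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
static inline
struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
{
	/* Architectures which track the effective target CPU(s) */
	return d->common->effective_affinity;
}
#else
static inline
struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
{
	/* Fallback: the general affinity mask */
	return d->common->affinity;
}
#endif

With the fallback in place, irq_needs_fixup() reduces to the same
cpumask_test_cpu() check on the general affinity mask that the old code
performed inline, so behaviour on architectures without effective
affinity support should be unchanged.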
