Message-Id: <20240419013322.58500-2-dongli.zhang@oracle.com>
Date: Thu, 18 Apr 2024 18:33:22 -0700
From: Dongli Zhang <dongli.zhang@...cle.com>
To: linux-kernel@...r.kernel.org
Cc: virtualization@...ts.linux.dev, tglx@...utronix.de, joe.jin@...cle.com
Subject: [PATCH 1/1] genirq/cpuhotplug: retry with online CPUs on irq_do_set_affinity failure

When a CPU goes offline, its IRQs are migrated to other CPUs. Managed
IRQs are either migrated or, if every CPU in the managed IRQ's affinity
mask is offline, shut down. Regular IRQs are only ever migrated.
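
For context, migrate_one_irq() is invoked from the CPU hotplug path
when a CPU goes down. A condensed sketch of the caller, closely
following kernel/irq/cpuhotplug.c (sparse-irq details elided):

void irq_migrate_all_off_this_cpu(void)
{
	struct irq_desc *desc;
	unsigned int irq;

	for_each_active_irq(irq) {
		bool affinity_broken;

		desc = irq_to_desc(irq);
		raw_spin_lock(&desc->lock);
		/*
		 * Migrate the IRQ, or shut it down if it is managed
		 * and its whole affinity set is offline.
		 */
		affinity_broken = migrate_one_irq(desc);
		raw_spin_unlock(&desc->lock);

		if (affinity_broken)
			pr_debug_ratelimited("IRQ %u: no longer affine to CPU%u\n",
					     irq, smp_processor_id());
	}
}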

migrate_one_irq() first selects the IRQ's pending_mask or affinity_mask
as the migration target:

	if (irq_fixup_move_pending(desc, true))
		affinity = irq_desc_get_pending_mask(desc);
	else
		affinity = irq_data_get_affinity_mask(d);

If every CPU in the pending_mask/affinity_mask is already offline,
migrate_one_irq() falls back to all online CPUs:

	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
		/*
		 * If the interrupt is managed, then shut it down and leave
		 * the affinity untouched.
		 */
		if (irqd_affinity_is_managed(d)) {
			irqd_set_managed_shutdown(d);
			irq_shutdown_and_deactivate(desc);
			return false;
		}
		affinity = cpu_online_mask;
		brokeaff = true;
	}

However, there is a corner case: although some CPUs in the
pending_mask/affinity_mask are still online, they may have run out of
available vectors. If the kernel goes on to call irq_do_set_affinity()
with only those CPUs, the call fails with -ENOSPC.
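
The -ENOSPC comes from the x86 vector matrix allocator:
irq_matrix_alloc() searches only the CPUs in the requested mask, so it
fails even when CPUs outside that mask still have free vectors. A
condensed, simplified sketch of the relevant path in
arch/x86/kernel/apic/vector.c (reservation and tracing details
omitted):

/*
 * irq_do_set_affinity() reaches this point through the vector
 * domain's .irq_set_affinity callback. irq_matrix_alloc() scans
 * only @dest; if every online CPU in @dest has exhausted its
 * vector space, -ENOSPC propagates back to migrate_one_irq().
 */
static int assign_vector_locked(struct irq_data *irqd,
				const struct cpumask *dest)
{
	struct apic_chip_data *apicd = apic_chip_data(irqd);
	unsigned int cpu = apicd->cpu;
	int vector;

	vector = irq_matrix_alloc(vector_matrix, dest,
				  apicd->has_reserved, &cpu);
	if (vector < 0)
		return vector;	/* -ENOSPC: no free vector in @dest */

	apic_update_vector(irqd, vector, cpu);
	apic_update_irq_cfg(irqd, vector, cpu);
	return 0;
}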

This is unreasonable, as other online CPUs still have plenty of
available vectors. In the VECTOR matrix dump below (e.g. via the irq
debugfs), CPU 2 has no available vectors left (avl = 0), while every
other online CPU still has about 147:

name:   VECTOR
 size:   0
 mapped: 529
 flags:  0x00000103
Online bitmaps:        7
Global available:    884
Global reserved:       6
Total allocated:     539
System: 36: 0-19,21,50,128,236,243-244,246-255
 | CPU | avl | man | mac | act | vectors
     0   147     0     0   55  32-49,51-87
     1   147     0     0   55  32-49,51-87
     2     0     0     0  202  32-49,51-127,129-235
     4   147     0     0   55  32-49,51-87
     5   147     0     0   55  32-49,51-87
     6   148     0     0   54  32-49,51-86
     7   148     0     0   54  32-49,51-86

This issue does not affect managed IRQs, whose vectors are already
reserved before CPU hotplug. For regular IRQs, retry with all online
CPUs if the prior irq_do_set_affinity() fails with -ENOSPC.

Cc: Joe Jin <joe.jin@...cle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@...cle.com>
---
 kernel/irq/cpuhotplug.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 1ed2b1739363..d1666a6b73f4 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -130,6 +130,19 @@ static bool migrate_one_irq(struct irq_desc *desc)
 	 * CPU.
 	 */
 	err = irq_do_set_affinity(d, affinity, false);
+
+	if (err == -ENOSPC &&
+	    !irqd_affinity_is_managed(d) &&
+	    affinity != cpu_online_mask) {
+		pr_debug("IRQ%u: set affinity failed for %*pbl, re-try with all online CPUs\n",
+			 d->irq, cpumask_pr_args(affinity));
+
+		affinity = cpu_online_mask;
+		brokeaff = true;
+
+		err = irq_do_set_affinity(d, affinity, false);
+	}
+
 	if (err) {
 		pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n",
 				    d->irq, err);
-- 
2.34.1

