Message-ID: <877c2jk5ka.ffs@tglx>
Date: Wed, 14 May 2025 09:35:49 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Brian Norris <briannorris@...omium.org>
Cc: Douglas Anderson <dianders@...omium.org>, Tsai Sung-Fu
 <danielsftsai@...gle.com>, linux-kernel@...r.kernel.org, Brian Norris
 <briannorris@...omium.org>
Subject: Re: [PATCH 2/2] genirq: Retain disable depth across irq
 shutdown/startup

On Tue, May 13 2025 at 15:42, Brian Norris wrote:
> If an IRQ is shut down and restarted while it was already disabled, its
> depth is clobbered and reset to 0. This can produce unexpected results,
> as:
> 1) the consuming driver probably expected it to stay disabled and
> 2) the kernel starts complaining about "Unbalanced enable for IRQ N" the
>    next time the consumer calls enable_irq()
>
> This problem can occur especially for affinity-managed IRQs that are
> already disabled before CPU hotplug.

Groan.

> I'm not very confident this is a fully correct fix, as I'm not sure I've
> grokked all the startup/shutdown logic in the IRQ core. This probably
> serves better as an example method to pass the tests in patch 1.

It's close enough except for a subtle detail.

> @@ -272,7 +272,9 @@ int irq_startup(struct irq_desc *desc, bool resend, bool force)
>  	const struct cpumask *aff = irq_data_get_affinity_mask(d);
>  	int ret = 0;
>  
> -	desc->depth = 0;
> +	desc->depth--;
> +	if (desc->depth)
> +		return 0;

This breaks a

     request_irq()
     disable_irq()
     free_irq()
     request_irq()

sequence.
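
I.e. something like the (hypothetical, for illustration only) driver
fragment below, where the second request_irq() is expected to hand back
an enabled interrupt again:

     #include <linux/interrupt.h>

     static int example_rebind(unsigned int irq, irq_handler_t handler, void *dev)
     {
             int ret;

             ret = request_irq(irq, handler, 0, "example", dev);
             if (ret)
                     return ret;

             disable_irq(irq);       /* disable depth: 0 -> 1 */
             free_irq(irq, dev);     /* discards the action and the driver's state */

             /*
              * The fresh request is expected to come back enabled,
              * independent of the earlier, never balanced disable_irq().
              */
             return request_irq(irq, handler, 0, "example", dev);
     }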
  
So the only case where the disable depth needs to be preserved is for
managed interrupts in the hotunplug -> shutdown -> hotplug -> startup
scenario. Making that explicit avoids chasing all the other places and
sprinkling desc->depth = 1 into them. Something like the uncompiled
patch below should do the trick.

Thanks,

        tglx
---
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 36cf1b09cc84..b88e9d36d933 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -223,6 +223,19 @@ __irq_startup_managed(struct irq_desc *desc, const struct cpumask *aff,
 		return IRQ_STARTUP_ABORT;
 	return IRQ_STARTUP_MANAGED;
 }
+
+void irq_startup_managed(struct irq_desc *desc)
+{
+	/*
+	 * Only start it up when the disable depth is 1, so that a disable,
+	 * hotunplug, hotplug sequence does not end up enabling it during
+	 * hotplug unconditionally.
+	 */
+	desc->depth--;
+	if (!desc->depth)
+		irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
+}
+
 #else
 static __always_inline int
 __irq_startup_managed(struct irq_desc *desc, const struct cpumask *aff,
@@ -290,6 +303,7 @@ int irq_startup(struct irq_desc *desc, bool resend, bool force)
 			ret = __irq_startup(desc);
 			break;
 		case IRQ_STARTUP_ABORT:
+			desc->depth = 1;
 			irqd_set_managed_shutdown(d);
 			return 0;
 		}
@@ -322,7 +336,13 @@ void irq_shutdown(struct irq_desc *desc)
 {
 	if (irqd_is_started(&desc->irq_data)) {
 		clear_irq_resend(desc);
-		desc->depth = 1;
+		/*
+		 * Increment disable depth, so that a managed shutdown on
+		 * CPU hotunplug preserves the actual disabled state when the
+		 * CPU comes back online. See irq_startup_managed().
+		 */
+		desc->depth++;
+
 		if (desc->irq_data.chip->irq_shutdown) {
 			desc->irq_data.chip->irq_shutdown(&desc->irq_data);
 			irq_state_set_disabled(desc);
diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 15a7654eff68..3ed5b1592735 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -219,7 +219,7 @@ static void irq_restore_affinity_of_irq(struct irq_desc *desc, unsigned int cpu)
 		return;
 
 	if (irqd_is_managed_and_shutdown(data))
-		irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
+		irq_startup_managed(desc);
 
 	/*
 	 * If the interrupt can only be directed to a single target
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index b0290849c395..8d2b3ac80ef3 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -87,6 +87,7 @@ extern void __enable_irq(struct irq_desc *desc);
 extern int irq_activate(struct irq_desc *desc);
 extern int irq_activate_and_startup(struct irq_desc *desc, bool resend);
 extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
+void irq_startup_managed(struct irq_desc *desc);
 
 extern void irq_shutdown(struct irq_desc *desc);
 extern void irq_shutdown_and_deactivate(struct irq_desc *desc);
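
Assuming this sits on top of the current irq_startup(), which still
resets the depth to 0 for the regular request_irq()/enable_irq() paths,
the bookkeeping for a managed interrupt that was disabled before its CPU
went offline would roughly look like:

     request_irq()    -> irq_startup()           depth = 0   (running)
     disable_irq()                               depth = 1
     CPU hotunplug    -> irq_shutdown()          depth = 2   (managed shutdown)
     CPU hotplug      -> irq_startup_managed()   depth = 1   (stays disabled)
     enable_irq()     -> irq_startup()           depth = 0   (running again)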
