Message-ID: <20251118143052.68778-3-frederic@kernel.org>
Date: Tue, 18 Nov 2025 15:30:52 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Marco Crivellari <marco.crivellari@...e.com>,
Waiman Long <llong@...hat.com>,
cgroups@...r.kernel.org
Subject: [PATCH 2/2] genirq: Remove cpumask availability check on kthread affinity setting

Failing to allocate the affinity mask of an irqdesc fails the whole
irqdesc initialization. It is therefore guaranteed that the cpumask is
available whenever the related IRQ objects, such as the kthread
handler, are alive.

Remove the superfluous check, which is merely a historical leftover,
and also drop the now obsolete comment above it.

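For illustration, after this change irq_thread_check_affinity() is
expected to look roughly like the sketch below. The lines between the
two hunks of the diff are reconstructed from the surrounding context
rather than copied verbatim from the tree, so details may differ:

static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
{
	cpumask_var_t mask;

	if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
		return;

	/*
	 * If the temporary cpumask allocation fails, set IRQTF_AFFINITY
	 * again so the affinity update is retried on the next wakeup.
	 */
	if (!alloc_cpumask_var(&mask, GFP_KERNEL)) {
		set_bit(IRQTF_AFFINITY, &action->thread_flags);
		return;
	}

	scoped_guard(raw_spinlock_irq, &desc->lock) {
		const struct cpumask *m;

		/* The descriptor's affinity mask is always allocated here */
		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
		cpumask_copy(mask, m);
	}

	set_cpus_allowed_ptr(current, mask);
	free_cpumask_var(mask);
}
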
Suggested-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
---
kernel/irq/manage.c | 17 ++++-------------
1 file changed, 4 insertions(+), 13 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 76e2cbe21d1f..e9316cba311c 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1001,7 +1001,6 @@ static irqreturn_t irq_forced_secondary_handler(int irq, void *dev_id)
static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
{
cpumask_var_t mask;
- bool valid = false;
if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
return;
@@ -1018,21 +1017,13 @@ static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *a
}
scoped_guard(raw_spinlock_irq, &desc->lock) {
- /*
- * This code is triggered unconditionally. Check the affinity
- * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out.
- */
- if (cpumask_available(desc->irq_common_data.affinity)) {
- const struct cpumask *m;
+ const struct cpumask *m;
- m = irq_data_get_effective_affinity_mask(&desc->irq_data);
- cpumask_copy(mask, m);
- valid = true;
- }
+ m = irq_data_get_effective_affinity_mask(&desc->irq_data);
+ cpumask_copy(mask, m);
}
- if (valid)
- set_cpus_allowed_ptr(current, mask);
+ set_cpus_allowed_ptr(current, mask);
free_cpumask_var(mask);
}
--
2.51.0