Message-ID: <87y0o0keci.ffs@tglx>
Date: Thu, 20 Nov 2025 14:45:01 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Marek Szyprowski <m.szyprowski@...sung.com>, Frederic Weisbecker
<frederic@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Marco Crivellari
<marco.crivellari@...e.com>, Waiman Long <llong@...hat.com>,
cgroups@...r.kernel.org
Subject: Re: [PATCH 1/2] genirq: Fix IRQ threads affinity VS cpuset isolated
partitions
On Thu, Nov 20 2025 at 12:51, Marek Szyprowski wrote:
> On 18.11.2025 15:30, Frederic Weisbecker wrote:
>> In the meantime, cpuset shouldn't fiddle with IRQ threads directly.
>> To prevent from that, set the PF_NO_SETAFFINITY flag to them.
>>
>> Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
>
> This patch landed in today's linux-next as commit 844dcacab287 ("genirq:
> Fix interrupt threads affinity vs. cpuset isolated partitions"). In my
> tests I found that it triggers a warning on some of my test systems.
> This is an example of such a warning:
>
> ------------[ cut here ]------------
> WARNING: CPU: 0 PID: 1 at kernel/kthread.c:599 kthread_bind_mask+0x2c/0x84
> Modules linked in:
> CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted
> 6.18.0-rc1-00031-g844dcacab287 #16177 PREEMPT
> Hardware name: Samsung Exynos (Flattened Device Tree)
> Call trace:
> unwind_backtrace from show_stack+0x10/0x14
> show_stack from dump_stack_lvl+0x68/0x88
> dump_stack_lvl from __warn+0x80/0x1d0
> __warn from warn_slowpath_fmt+0x1b0/0x1bc
> warn_slowpath_fmt from kthread_bind_mask+0x2c/0x84
> kthread_bind_mask from wake_up_and_wait_for_irq_thread_ready+0x3c/0xd4
> wake_up_and_wait_for_irq_thread_ready from __setup_irq+0x3e8/0x894

Hmm. The only explanation for that is that the thread was already woken
up, left its initial UNINTERRUPTIBLE state, and is now waiting for an
interrupt wakeup in INTERRUPTIBLE state.
To validate that theory, can you please apply the patch below? The extra
warning I added should trigger first.

Let me think about a proper cure...

Thanks,
tglx
---
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -615,6 +615,8 @@ static void __kthread_bind(struct task_s
void kthread_bind_mask(struct task_struct *p, const struct cpumask *mask)
{
struct kthread *kthread = to_kthread(p);
+
+ WARN_ON_ONCE(kthread->started);
__kthread_bind_mask(p, mask, TASK_UNINTERRUPTIBLE);
WARN_ON_ONCE(kthread->started);
}
@@ -631,6 +633,8 @@ void kthread_bind_mask(struct task_struc
void kthread_bind(struct task_struct *p, unsigned int cpu)
{
struct kthread *kthread = to_kthread(p);
+
+ WARN_ON_ONCE(kthread->started);
__kthread_bind(p, cpu, TASK_UNINTERRUPTIBLE);
WARN_ON_ONCE(kthread->started);
}