Message-ID: <70335ad4-59b6-45fd-8a76-bd91d9658810@linux.dev>
Date: Wed, 28 Jan 2026 14:33:32 -0800
From: Ihor Solodrai <ihor.solodrai@...ux.dev>
To: Thomas Gleixner <tglx@...nel.org>, Shrikanth Hegde
<sshegde@...ux.ibm.com>, Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>
Cc: Gabriele Monaco <gmonaco@...hat.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Michael Jeanson <mjeanson@...icios.com>, Jens Axboe <axboe@...nel.dk>,
"Paul E. McKenney" <paulmck@...nel.org>,
"Gautham R. Shenoy" <gautham.shenoy@....com>,
Florian Weimer <fweimer@...hat.com>, Tim Chen <tim.c.chen@...el.com>,
Yury Norov <yury.norov@...il.com>, bpf <bpf@...r.kernel.org>,
sched-ext@...ts.linux.dev, Kernel Team <kernel-team@...a.com>,
Alexei Starovoitov <ast@...nel.org>, Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Puranjay Mohan
<puranjay@...nel.org>, Tejun Heo <tj@...nel.org>
Subject: Re: [patch V5 00/20] sched: Rewrite MM CID management

On 1/28/26 2:24 PM, Thomas Gleixner wrote:
> On Wed, Jan 28 2026 at 14:56, Thomas Gleixner wrote:
>> On Wed, Jan 28 2026 at 18:28, Shrikanth Hegde wrote:
>>> On 1/28/26 5:27 PM, Thomas Gleixner wrote:
>>> watchdog: CPU 23 self-detected hard LOCKUP @ mm_get_cid+0xe8/0x188
>>> watchdog: CPU 23 TB:1434903268401795, last heartbeat TB:1434897252302837 (11750ms ago)
>>> NIP [c0000000001b7134] mm_get_cid+0xe8/0x188
>>> LR [c0000000001b7154] mm_get_cid+0x108/0x188
>>> Call Trace:
>>> [c000000004c37db0] [c000000001145d84] cpuidle_enter_state+0xf8/0x6a4 (unreliable)
>>> [c000000004c37e00] [c0000000001b95ac] mm_cid_switch_to+0x3c4/0x52c
>>> [c000000004c37e60] [c000000001147264] __schedule+0x47c/0x700
>>
>> So if the above spins in mm_get_cid() then the below is just a consequence.
>>
>>> watchdog: CPU 11 self-detected hard LOCKUP @ plpar_hcall_norets_notrace+0x18/0x2c
>>> watchdog: CPU 11 TB:1434903340004919, last heartbeat TB:1434897249749892 (11895ms ago)
>>> NIP [c0000000000f84fc] plpar_hcall_norets_notrace+0x18/0x2c
>>> LR [c000000001152588] queued_spin_lock_slowpath+0xd88/0x15d0
>>> Call Trace:
>>> [c00000056b69fb10] [c00000056b69fba0] 0xc00000056b69fba0 (unreliable)
>>> [c00000056b69fc30] [c000000001153ce0] _raw_spin_lock+0x80/0xa0
>>> [c00000056b69fc50] [c0000000001b9a34] raw_spin_rq_lock_nested+0x3c/0xf8
>>> [c00000056b69fc80] [c0000000001b9bb8] mm_cid_fixup_cpus_to_tasks+0xc8/0x28c
>>> [c00000056b69fd00] [c0000000001bff34] sched_mm_cid_exit+0x108/0x22c
>>> [c00000056b69fd40] [c000000000167b08] do_exit+0xf4/0x5d0
>>> [c00000056b69fdf0] [c00000000016800c] make_task_dead+0x0/0x178
>>> [c00000056b69fe10] [c0000000000316c8] system_call_exception+0x128/0x390
>>> [c00000056b69fe50] [c00000000000cedc] system_call_vectored_common+0x15c/0x2ec
>>
>>> I am wondering if it is this loop in mm_get_cid that may not be getting
>>> a cid for a long time? Is that possible?
>>
>> It shouldn't be possible by design, but it seems there is a corner case
>> lurking somewhere which hasn't been covered. Let me stare at the logic
>> in the transition functions once more. That's where CPU11 comes from:
>>
>>> [c00000056b69fc80] [c0000000001b9bb8] mm_cid_fixup_cpus_to_tasks+0xc8/0x28c
>>
>> The exiting task initiated a transition back from per-CPU to per-task
>> mode, and that seems to make things unhappy for mysterious reasons.
>
> I stared at it for a while and found the below stupidity. But when I
> actually sat down after a while away from the keyboard and tried to
> write a concise changelog explaining the root cause, I failed to come up
> with a coherent explanation of why this would prevent the above scenario,
> which hints at a situation of MMCID exhaustion.
>
> @Ihor: Is the BPF CI fallout reproducible? If so, can you please provide
> it?
Not reliably, unfortunately. I saw it at least twice (out of 100+
runs) this week.
I added `hardlockup_all_cpu_backtrace=1` to get more logs. If there
is anything else I could set up (kconfigs, debug switches) that may be
helpful, let me know.
We have a steady stream of jobs running, so if it's not a one-off, it's
likely to happen again. I'll share if we get anything.
Thank you for investigating!
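
For anyone trying to picture the exhaustion scenario: below is a toy
model of a spinning CID allocator, loosely based on the retry loop
visible in the mm_get_cid() backtrace. All names here (cid_alloc,
cid_drop, NR_SLOTS) are made up for illustration; this is not the
kernel's mm_cid code. The point is just that a single leaked slot is
enough to turn a bounded pool into an unbounded spin once the remaining
slots are held concurrently:

/*
 * Toy model of a CID pool. A CID is a small integer < NR_SLOTS;
 * cid_alloc() spins until a slot frees up, mirroring the loop in the
 * mm_get_cid() backtrace. If one slot is marked busy but its owner
 * never releases it, any caller that needs a slot while the rest are
 * in use spins forever -- the hard lockup signature above.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_SLOTS 4

static bool busy[NR_SLOTS];

/* Allocate the first free CID; spins while the pool is exhausted. */
static int cid_alloc(void)
{
	for (;;) {
		for (int cid = 0; cid < NR_SLOTS; cid++) {
			if (!busy[cid]) {
				busy[cid] = true;
				return cid;
			}
		}
		/* All slots busy: retry, like the loop in mm_get_cid(). */
	}
}

static void cid_drop(int cid)
{
	busy[cid] = false;
}

int main(void)
{
	/* Leak one slot: allocated, but never dropped by its "owner". */
	(void)cid_alloc();

	/* Alloc/drop cycles on the remaining slots still work fine... */
	for (int i = 0; i < 8; i++)
		cid_drop(cid_alloc());

	/*
	 * ...until NR_SLOTS - 1 more slots are held concurrently. At
	 * that point the pool is full and the next cid_alloc() would
	 * spin forever -- the suspected MMCID exhaustion scenario.
	 */
	int held[NR_SLOTS - 1];
	for (int i = 0; i < NR_SLOTS - 1; i++)
		held[i] = cid_alloc();
	(void)held;
	printf("pool exhausted; next cid_alloc() would spin\n");
	return 0;
}
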
>
> Thanks,
>
> tglx
> ---
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -10664,8 +10664,14 @@ void sched_mm_cid_exit(struct task_struc
> scoped_guard(raw_spinlock_irq, &mm->mm_cid.lock) {
> if (!__sched_mm_cid_exit(t))
> return;
> - /* Mode change required. Transfer currents CID */
> - mm_cid_transit_to_task(current, this_cpu_ptr(mm->mm_cid.pcpu));
> + /*
> + * Mode change. The task has the CID unset
> + * already. The CPU CID is still valid and
> + * does not have MM_CID_TRANSIT set as the
> + * mode change has just taken effect under
> + * mm::mm_cid::lock. Drop it.
> + */
> + mm_drop_cid_on_cpu(mm, this_cpu_ptr(mm->mm_cid.pcpu));
> }
> mm_cid_fixup_cpus_to_tasks(mm);
> return;
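
(For what it's worth, my reading of the hunk: __sched_mm_cid_exit() has
already unset the exiting task's CID, so transferring the still-valid
CPU CID back to that task could leave a slot owned by a task that will
never release it, whereas dropping it on the CPU returns the slot to
the pool before mm_cid_fixup_cpus_to_tasks() runs. That's just my
reading of the comment in the hunk, not a confirmed root cause.)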