Message-ID: <20260129211557.678686545@kernel.org>
Date: Thu, 29 Jan 2026 22:20:48 +0100
From: Thomas Gleixner <tglx@...nel.org>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Ihor Solodrai <ihor.solodrai@...ux.dev>,
Shrikanth Hegde <sshegde@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Michael Jeanson <mjeanson@...icios.com>
Subject: [patch 1/4] sched/mmcid: Prevent live lock on task to CPU mode
transition

Ihor reported a BPF CI failure which turned out to be a live lock in the
MM_CID management. The scenario is:

A test program creates the 4th child, which means the number of MM_CID
users becomes greater than the number of CPUs (four in this example), so
the process switches to per CPU ownership mode.

At this point each live task of the program has a CID associated. Assume
CIDs were assigned in thread creation order for simplicity:

  T0 (main thread)  CID0   runs fork() and creates T4
  T1 (1st child)    CID1
  T2 (2nd child)    CID2
  T3 (3rd child)    CID3
  T4 (4th child)    ---    not visible yet

T0 sets mm_cid::percpu = true, transfers its own CID to CPU0, on which it
runs, and then starts the fixup, which walks through the threads and
either transfers each per task CID to the CPU the task is running on or
drops it back into the pool if the task is not on a CPU.
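
Schematically, the fixup walk before this patch did the following. This
is a minimal userspace model with made-up types and helpers; the real
walk is mm_cid_fixup_tasks_to_cpus() in the patch below:

  #include <stdio.h>

  struct task {
          int cid;        /* -1 means no CID owned */
          int cpu;        /* -1 means not on a CPU */
  };

  static int cpu_cid[4] = { -1, -1, -1, -1 };
  static unsigned int cid_pool;   /* one bit per free CID */

  static void fixup_tasks_to_cpus(struct task *tasks, int nr)
  {
          for (int i = 0; i < nr; i++) {
                  struct task *t = &tasks[i];

                  if (t->cid < 0)
                          continue;
                  if (t->cpu >= 0)
                          cpu_cid[t->cpu] = t->cid;  /* transfer to the CPU */
                  else
                          cid_pool |= 1u << t->cid;  /* drop into the pool */
                  t->cid = -1;
          }
  }

  int main(void)
  {
          struct task tasks[] = {
                  { .cid = 1, .cpu =  1 },   /* T1 running on CPU1 */
                  { .cid = 2, .cpu = -1 },   /* T2 not on a CPU */
          };

          fixup_tasks_to_cpus(tasks, 2);
          printf("CPU1 cid=%d, pool=%#x\n", cpu_cid[1], cid_pool);
          return 0;
  }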

During that, T1 - T3 are free to schedule in and out before the fixup
catches up with them. Going through all possible permutations with a
Python script revealed a few problematic cases. The most trivial one is:

  T1 schedules in on CPU1 and observes percpu == true, so it transfers
  its CID to CPU1.

  T1 is migrated to CPU2 and on schedule in observes percpu == true, but
  CPU2 does not have a CID associated and T1 already transferred its own
  to CPU1.

  So it has to allocate one with the CPU2 runqueue lock held, but the
  pool is empty, so it keeps looping in mm_get_cid().

Now T0 reaches T1 in the thread walk and tries to acquire the
corresponding runqueue lock, which is held, causing a full live lock.
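
For illustration, this is a minimal userspace model of the pool
exhaustion which sets up the live lock. The CID count, names and pool
layout are made up for the example; only mm_get_cid() looping on an
empty pool comes from the scenario above:

  #include <stdio.h>
  #include <stdbool.h>

  #define NR_CIDS 4       /* four CPUs, four CIDs in this example */

  static bool cid_free[NR_CIDS];  /* models the MM_CID bitmap */

  /* Models mm_get_cid(): the kernel loops, this model just fails */
  static int try_get_cid(void)
  {
          for (int i = 0; i < NR_CIDS; i++) {
                  if (cid_free[i]) {
                          cid_free[i] = false;
                          return i;
                  }
          }
          return -1;      /* empty pool: the kernel spins with the rq lock held */
  }

  int main(void)
  {
          int cpu_cid[NR_CIDS] = { -1, -1, -1, -1 };

          /* T0..T3 own CID0..CID3, so the pool starts out empty */
          cpu_cid[0] = 0; /* T0 moved its CID to CPU0 on the mode switch */
          cpu_cid[1] = 1; /* T1 scheduled in on CPU1 and transferred CID1 */

          /* T1 migrates to CPU2, which owns no CID yet, and must allocate */
          int cid = try_get_cid();

          printf("CPU0=%d CPU1=%d CPU2 allocation=%d (%s)\n",
                 cpu_cid[0], cpu_cid[1], cid,
                 cid < 0 ? "T1 loops, T0 blocks on the rq lock" : "ok");
          return 0;
  }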

There is a similar scenario in the reverse direction, switching from per
CPU to task mode, which is way more obvious and was therefore addressed
by an intermediate mode. In this mode the CIDs are marked with
MM_CID_TRANSIT, which means that they are neither owned by the CPU nor by
the task. When a task schedules out with a transit CID it drops the CID
back into the pool, making it available for others to use temporarily.
Once the task which initiated the mode switch has finished the fixup, it
clears the transit mode and the process goes back into per task ownership
mode.
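
The schedule out rule for transit CIDs boils down to the following
userspace sketch. The bit position of MM_CID_TRANSIT and the helper
names are assumptions for illustration:

  #include <stdio.h>

  #define MM_CID_TRANSIT  (1u << 30)      /* assumed bit position */
  #define CID_UNSET       (~0u)

  static unsigned int cid_pool;           /* one bit per free CID */

  static void put_cid(unsigned int cid)
  {
          cid_pool |= 1u << cid;
  }

  /* Transit CIDs are only borrowed: return them on schedule out */
  static void sched_out_transit(unsigned int *cid)
  {
          if (*cid != CID_UNSET && (*cid & MM_CID_TRANSIT)) {
                  put_cid(*cid & ~MM_CID_TRANSIT);
                  *cid = CID_UNSET;
          }
  }

  int main(void)
  {
          unsigned int cid = 2u | MM_CID_TRANSIT; /* CID2 in transit mode */

          sched_out_transit(&cid);
          printf("pool bitmap: %#x, task cid: %#x\n", cid_pool, cid);
          return 0;
  }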

Unfortunately this insight was not mapped back to the task to CPU mode
switch, as the above described scenario was not considered in the
analysis.

Apply the same transit mechanism to the task to CPU mode switch to handle
these problematic cases correctly.

As with the CPU to task transition, this results in potential temporary
contention on the CID bitmap, but only for the time it takes to complete
the transition. After that the process stays in steady state mode, which
does not touch the bitmap at all.
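
Both directions now share the same two phase ordering. The sketch below
mirrors that ordering as a runnable userspace model; the fixup function
is a stub and the atomics merely stand in for WRITE_ONCE():

  #include <stdatomic.h>
  #include <stdio.h>

  #define MM_CID_TRANSIT  (1u << 30)      /* assumed bit position */

  static atomic_uint transit;             /* models mm::mm_cid.transit */
  static atomic_int percpu;               /* models mm::mm_cid.percpu */

  static void mm_cid_fixup_tasks_to_cpus(void)
  {
          /* Stub: the real walk moves or drops every per task CID */
  }

  static void switch_to_percpu_mode(void)
  {
          /* Phase 1: bridge the transfer; new CIDs get tagged TRANSIT */
          atomic_store(&transit, MM_CID_TRANSIT);
          atomic_store(&percpu, 1);

          /* Phase 2: fix up the stragglers, then leave transit mode */
          mm_cid_fixup_tasks_to_cpus();
          atomic_store(&transit, 0);
  }

  int main(void)
  {
          switch_to_percpu_mode();
          printf("percpu=%d transit=%#x\n", atomic_load(&percpu),
                 atomic_load(&transit));
          return 0;
  }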

Fixes: fbd0e71dc370 ("sched/mmcid: Provide CID ownership mode fixup functions")
Reported-by: Ihor Solodrai <ihor.solodrai@...ux.dev>
Closes: https://lore.kernel.org/2b7463d7-0f58-4e34-9775-6e2115cfb971@linux.dev
Signed-off-by: Thomas Gleixner <tglx@...nel.org>
---
kernel/sched/core.c | 118 ++++++++++++++++++++++++++++++++-------------------
kernel/sched/sched.h | 4 +
2 files changed, 80 insertions(+), 42 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10269,7 +10269,8 @@ void call_trace_sched_update_nr_running(
* Serialization rules:
*
* mm::mm_cid::mutex: Serializes fork() and exit() and therefore
- * protects mm::mm_cid::users.
+ * protects mm::mm_cid::users and mode switch
+ * transitions
*
* mm::mm_cid::lock: Serializes mm_update_max_cids() and
* mm_update_cpus_allowed(). Nests in mm_cid::mutex
@@ -10285,14 +10286,61 @@ void call_trace_sched_update_nr_running(
*
* A CID is either owned by a task (stored in task_struct::mm_cid.cid) or
* by a CPU (stored in mm::mm_cid.pcpu::cid). CIDs owned by CPUs have the
- * MM_CID_ONCPU bit set. During transition from CPU to task ownership mode,
- * MM_CID_TRANSIT is set on the per task CIDs. When this bit is set the
- * task needs to drop the CID into the pool when scheduling out. Both bits
- * (ONCPU and TRANSIT) are filtered out by task_cid() when the CID is
- * actually handed over to user space in the RSEQ memory.
+ * MM_CID_ONCPU bit set.
+ *
+ * During the transition of ownership mode, the MM_CID_TRANSIT bit is set
+ * on the CIDs. When this bit is set the tasks drop the CID back into the
+ * pool when scheduling out.
+ *
+ * Both bits (ONCPU and TRANSIT) are filtered out by task_cid() when the
+ * CID is actually handed over to user space in the RSEQ memory.
*
* Mode switching:
*
+ * All transitions of ownership mode happen in two phases:
+ *
+ * 1) mm::mm_cid.transit contains MM_CID_TRANSIT. This is OR'ed on the CIDs
+ *    and denotes that the CID is only temporarily owned by a task. When
+ *    the task schedules out it drops the CID back into the pool if this
+ *    bit is set.
+ *
+ * 2) The initiating context walks the per CPU space or the tasks to fix
+ *    up or drop the CIDs and after completion it clears mm::mm_cid.transit.
+ *    After that point the CIDs are strictly task or CPU owned again.
+ *
+ * This two phase transition is required to prevent CID space exhaustion
+ * during the transition as a direct transfer of ownership would fail:
+ *
+ * - On task to CPU mode switch if a task is scheduled in on one CPU and
+ *   then migrated to another CPU before the fixup freed enough per task
+ *   CIDs.
+ *
+ * - On CPU to task mode switch if two tasks are scheduled in on the same
+ *   CPU before the fixup freed per CPU CIDs.
+ *
+ * Both scenarios can result in a live lock because sched_in() is invoked
+ * with the runqueue lock held and loops in search of a CID, while the
+ * fixup thread cannot make progress freeing CIDs because it is stuck on
+ * the same runqueue lock.
+ *
+ * While MM_CID_TRANSIT is active during the transition phase the MM_CID
+ * bitmap can be contended, but that's a temporary contention bound to the
+ * transition period. After that everything goes back into steady state and
+ * nothing except fork() and exit() will touch the bitmap. This is an
+ * acceptable tradeoff as it completely avoids complex serialization,
+ * memory barriers and atomic operations for the common case.
+ *
+ * Aside of that, this mechanism also ensures RT compatibility:
+ *
+ * - The task which runs the fixup is fully preemptible except for the
+ *   short runqueue lock held sections.
+ *
+ * - The transient impact of the bitmap contention is only problematic
+ *   when there is a thundering herd scenario of tasks scheduling in and
+ *   out concurrently. There is not much that can be done about that
+ *   except for avoiding mode switching by a proper overall system
+ *   configuration.
+ *
* Switching to per CPU mode happens when the user count becomes greater
* than the maximum number of CIDs, which is calculated by:
*
@@ -10306,12 +10354,13 @@ void call_trace_sched_update_nr_running(
*
* At the point of switching to per CPU mode the new user is not yet
* visible in the system, so the task which initiated the fork() runs the
- * fixup function: mm_cid_fixup_tasks_to_cpu() walks the thread list and
- * either transfers each tasks owned CID to the CPU the task runs on or
- * drops it into the CID pool if a task is not on a CPU at that point in
- * time. Tasks which schedule in before the task walk reaches them do the
- * handover in mm_cid_schedin(). When mm_cid_fixup_tasks_to_cpus() completes
- * it's guaranteed that no task related to that MM owns a CID anymore.
+ * fixup function. mm_cid_fixup_tasks_to_cpus() walks the thread list and
+ * either marks each task owned CID with MM_CID_TRANSIT if the task is
+ * running on a CPU or drops it into the CID pool if the task is not on a
+ * CPU. Tasks which schedule in before the task walk reaches them do the
+ * handover in mm_cid_schedin(). When mm_cid_fixup_tasks_to_cpus()
+ * completes it is guaranteed that no task related to that MM owns a CID
+ * anymore.
*
* Switching back to task mode happens when the user count goes below the
* threshold which was recorded on the per CPU mode switch:
@@ -10327,28 +10376,11 @@ void call_trace_sched_update_nr_running(
* run either in the deferred update function in context of a workqueue or
* by a task which forks a new one or by a task which exits. Whatever
* happens first. mm_cid_fixup_cpus_to_task() walks through the possible
- * CPUs and either transfers the CPU owned CIDs to a related task which
- * runs on the CPU or drops it into the pool. Tasks which schedule in on a
- * CPU which the walk did not cover yet do the handover themself.
- *
- * This transition from CPU to per task ownership happens in two phases:
- *
- * 1) mm:mm_cid.transit contains MM_CID_TRANSIT This is OR'ed on the task
- * CID and denotes that the CID is only temporarily owned by the
- * task. When it schedules out the task drops the CID back into the
- * pool if this bit is set.
- *
- * 2) The initiating context walks the per CPU space and after completion
- * clears mm:mm_cid.transit. So after that point the CIDs are strictly
- * task owned again.
- *
- * This two phase transition is required to prevent CID space exhaustion
- * during the transition as a direct transfer of ownership would fail if
- * two tasks are scheduled in on the same CPU before the fixup freed per
- * CPU CIDs.
- *
- * When mm_cid_fixup_cpus_to_tasks() completes it's guaranteed that no CID
- * related to that MM is owned by a CPU anymore.
+ * CPUs and either marks the CPU owned CIDs with MM_CID_TRANSIT if a
+ * related task is running on the CPU or drops them into the pool. Tasks
+ * which are scheduled in before the fixup covers them do the handover
+ * themselves. When mm_cid_fixup_cpus_to_tasks() completes it is
+ * guaranteed that no CID related to that MM is owned by a CPU anymore.
*/
/*
@@ -10400,9 +10432,9 @@ static bool mm_update_max_cids(struct mm
/* Mode change required? */
if (!!mc->percpu == !!mc->pcpu_thrs)
return false;
- /* When switching back to per TASK mode, set the transition flag */
- if (!mc->pcpu_thrs)
- WRITE_ONCE(mc->transit, MM_CID_TRANSIT);
+
+ /* Set the transition flag to bridge the transfer */
+ WRITE_ONCE(mc->transit, MM_CID_TRANSIT);
WRITE_ONCE(mc->percpu, !!mc->pcpu_thrs);
return true;
}
@@ -10493,10 +10525,10 @@ static void mm_cid_fixup_cpus_to_tasks(s
WRITE_ONCE(mm->mm_cid.transit, 0);
}
-static inline void mm_cid_transfer_to_cpu(struct task_struct *t, struct mm_cid_pcpu *pcp)
+static inline void mm_cid_transit_to_cpu(struct task_struct *t, struct mm_cid_pcpu *pcp)
{
if (cid_on_task(t->mm_cid.cid)) {
- t->mm_cid.cid = cid_to_cpu_cid(t->mm_cid.cid);
+ t->mm_cid.cid = cid_to_transit_cid(t->mm_cid.cid);
pcp->cid = t->mm_cid.cid;
}
}
@@ -10509,9 +10541,9 @@ static bool mm_cid_fixup_task_to_cpu(str
if (!t->mm_cid.active)
return false;
if (cid_on_task(t->mm_cid.cid)) {
- /* If running on the CPU, transfer the CID, otherwise drop it */
+ /* If running on the CPU, put the CID in transit mode, otherwise drop it */
if (task_rq(t)->curr == t)
- mm_cid_transfer_to_cpu(t, per_cpu_ptr(mm->mm_cid.pcpu, task_cpu(t)));
+ mm_cid_transit_to_cpu(t, per_cpu_ptr(mm->mm_cid.pcpu, task_cpu(t)));
else
mm_unset_cid_on_task(t);
}
@@ -10596,11 +10628,13 @@ void sched_mm_cid_fork(struct task_struc
if (!percpu)
mm_cid_transit_to_task(current, pcp);
else
- mm_cid_transfer_to_cpu(current, pcp);
+ mm_cid_transit_to_cpu(current, pcp);
}
if (percpu) {
mm_cid_fixup_tasks_to_cpus();
+ /* Clear the transition bit */
+ WRITE_ONCE(mm->mm_cid.transit, 0);
} else {
mm_cid_fixup_cpus_to_tasks(mm);
t->mm_cid.cid = mm_get_cid(mm);
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3841,6 +3841,10 @@ static __always_inline void mm_cid_from_
/* Still nothing, allocate a new one */
if (!cid_on_cpu(cpu_cid))
cpu_cid = cid_to_cpu_cid(mm_get_cid(mm));
+
+ /* Set the transition mode flag if required */
+ if (READ_ONCE(mm->mm_cid.transit))
+ cpu_cid = cpu_cid_to_cid(cpu_cid) | MM_CID_TRANSIT;
}
mm_cid_update_pcpu_cid(mm, cpu_cid);
mm_cid_update_task_cid(t, cpu_cid);