Message-ID: <617e1803-373c-486f-8eba-e54cc893b7f2@efficios.com>
Date: Fri, 30 Jan 2026 11:29:36 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Thomas Gleixner <tglx@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Cc: Ihor Solodrai <ihor.solodrai@...ux.dev>,
Shrikanth Hegde <sshegde@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Michael Jeanson <mjeanson@...icios.com>
Subject: Re: [patch 4/4] sched/mmcid: Optimize transitional CIDs when
scheduling out
On 2026-01-30 11:13, Thomas Gleixner wrote:
> On Fri, Jan 30 2026 at 10:50, Mathieu Desnoyers wrote:
>> On 2026-01-29 16:20, Thomas Gleixner wrote:
>>> During the investigation of the various transition mode issues,
>>> instrumentation revealed that the number of bitmap operations can be
>>> significantly reduced when a task with a transitional CID schedules out
>>> after the fixup function has completed and disabled the transition mode.
>>>
>>> At that point the mode is stable and therefore it is not required to drop
>>> the transitional CID back into the pool. As the fixup is complete, the
>>> potential exhaustion of the CID pool is no longer possible, so the CID can
>>> be transferred to the scheduling-out task or to the CPU depending on the
>>> current ownership mode. This is now possible because mm_cid::mode contains
>>> both the ownership state and the transition bit so the racy snapshot is
>>> valid under all circumstances because a subsequent modification of the
>>> mode is serialized by the corresponding runqueue lock.
>>
>> AFAIU the mc->mode updates are serialized by the mm->mm_cid.lock
>> and not the runqueue locks. What am I missing ?
>
> Actually the mode updates are serialized by the mutex. They happen under
> the lock as well, but the lock is not a serialization requirement for
> mode changes.

Right, I meant the mutex but got mixed up with the raw spinlock.

>
> What I meant to write with tired brain is:
>
> The racy snapshot is valid under runqueue lock even when there is a
> concurrent mode update going on because the subsequent fixup function
> is serialized with runqueue lock. That means in the following
> scenario:
>
>   CPU0                                CPU1
>   clear TRANSIT
>   ....
>                                       lock(rq)
>                                       sched_out()
>                                         CID has TRANSIT set
>                                         ...
>                                         // observes TRANSIT=0
>                                         localmode = READ_ONCE(...mode);
>   // sets TRANSIT
>   switch mode
>                                         transfer CID according to localmode
>   fixup()
>     lock(rq)  <- Blocked until the schedule on CPU1 is complete
>
> So both sched_out() and fixup() observe consistent state and everything
> just works.
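
If I read the scenario above correctly, the sched-out side boils down
to something like the sketch below. This is only my reading, not the
actual patch code: apart from READ_ONCE(), lockdep_assert_rq_held()
and mm->mm_cid.mode, every helper, field and constant name here
(MM_CID_TRANSIT, MM_CID_TASK_OWNED, mm_cid_release(),
mm_cid_transfer_to_cpu(), t->mm_cid) is made up for illustration.

  static void mm_cid_sched_out(struct rq *rq, struct task_struct *t, int cid)
  {
          unsigned int localmode;

          lockdep_assert_rq_held(rq);

          /*
           * Racy snapshot taken under the runqueue lock. A concurrent
           * mode switch is fine because its fixup serializes on this
           * runqueue lock afterwards.
           */
          localmode = READ_ONCE(t->mm->mm_cid.mode);

          if (localmode & MM_CID_TRANSIT) {
                  /* Transition in progress: drop the CID back into the pool. */
                  mm_cid_release(t->mm, cid);
          } else if (localmode & MM_CID_TASK_OWNED) {
                  /* Stable, per-task ownership: keep the CID on the task. */
                  t->mm_cid = cid;
          } else {
                  /* Stable, per-CPU ownership: hand the CID to the CPU. */
                  mm_cid_transfer_to_cpu(rq, t->mm, cid);
          }
  }
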
There is still one detail I'm concerned about here.

I would be tempted to add explicit memory barriers between the mode
stores and the fixups:

  store to mm->mm_cid.mode (set TRANSIT)
  smp_mb();    /* Order store to mode before rq locks. */
  mm_cid_fixup_cpus_to_tasks() / mm_cid_fixup_tasks_to_cpus()
  smp_mb();    /* Order rq unlocks before store to mode. */
  store to mm->mm_cid.mode (clear TRANSIT)

The reason is that AFAIU the rq locks taken within the fixups are the
only serialization between the scheduler and the fixup, but the mode
stores performed by the mode transition are done outside of the rq
locks, which means those stores can be reordered into the fixup rq
lock critical sections. Locks are semi-permeable barriers only, unless
there is something special about the rq lock?
AFAIU, having the transit state cleared while performing the
fixup is a state we don't want.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com