Message-ID: <20250904202858.GN4068168@noisy.programming.kicks-ass.net>
Date: Thu, 4 Sep 2025 22:28:58 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: Andrea Righi <arighi@...dia.com>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Joel Fernandes <joelagnelf@...dia.com>,
David Vernet <void@...ifault.com>,
Changwoo Min <changwoo@...lia.com>, Shuah Khan <shuah@...nel.org>,
sched-ext@...ts.linux.dev, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org,
Luigi De Matteis <ldematteis123@...il.com>
Subject: Re: [PATCH 07/16] sched_ext: Add a DL server for sched_ext tasks
On Wed, Sep 03, 2025 at 10:56:46PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 03, 2025 at 10:41:21AM -1000, Tejun Heo wrote:
> > Hello,
> >
> > On Wed, Sep 03, 2025 at 10:08:22PM +0200, Peter Zijlstra wrote:
> > > > I'm a bit confused. This series doesn't have prep patches to add @rf to
> > > > dl_server_pick_f. Is this the right patch?
> > >
> > > Patch 14 seems to be the proposed alternative, and I'm not liking that
> > > at all.
> > >
> > > That rf passing was very much also needed for that other issue; I'm not
> > > sure why that's gone away.
> >
> > Using balance() was my suggestion to stay within the current framework. If
> > we want to add @rf to pick_task(), that's more fundamental change. We
> > dropped the discussion in the other thread but I found it odd to add @rf to
> > pick_task() while disallowing the use of @rf in non-dl-server pick path and
> > if we want to allow that, we gotta solve the race between pick_task()
> > dropping rq lock and the ttwu inserting high pri task.
>
> I thought the idea was to add rf unconditionally, dl-server or not, it
> is needed in both cases.
>
> Yes, that race needs dealing with. We have this existing pattern that
> checks if a higher class has runnable tasks and restarts the pick.
> This is currently only done for pick_next_task_fair() but that can
> easily be extended.
>
> You suggested maybe moving this to the ttwu side -- but up to this point
> I thought we were in agreement. I'm not sure moving it to the ttwu side
> makes things better; it would need ttwu to know a pick is in progress
> and for which class. The existing restart pick is simpler, I think.
>
> Yes, the restart is somewhat more complicated if we want to deal with
> the dl-server, but not terribly so. It could just store a snapshot of
> rq->dl.dl_nr_running from before the pick and only restart if that went
> up.
Stepping back one step; per here:
https://lore.kernel.org/all/20250819100838.GH3245006@noisy.programming.kicks-ass.net/T/#mf8f95d1c2637a2ac9d9ec8f71fffe064a5718fff
the reason for dropping rq->lock is having to migrate a task from the
global dispatch queue.
Now, the current rules for migrating tasks are:
 WAKEUP:

   hold p->pi_lock; this serializes against ttwu() and if found blocked
   after taking the lock, you're sure it will stay blocked and you can
   call set_task_cpu(), then you can lock the target rq, enqueue the
   thing and call it a day.

 RUNNABLE:

   1) hold both the source and target rq->lock.

   2) hold the source rq->lock, set p->on_rq = TASK_ON_RQ_MIGRATING,
      dequeue, call set_task_cpu(), drop the source rq->lock, take the
      target rq->lock, enqueue, set p->on_rq = TASK_ON_RQ_QUEUED, drop
      the target rq->lock.
set_task_cpu() has a pile of assertions trying to make sure these rules
are followed.
Of concern here is the RUNNABLE thing -- if you want to strictly follow
those rules, you're going to have to drop rq->lock in order to acquire
the source rq->lock and all that.
However, the actual reason we need to acquire the source rq->lock, is
because that lock protects the data structures the task is on. Without
taking the source rq->lock you're not protected from concurrent use, it
could get scheduled in, or migrated elsewhere at the same time --
obviously bad things.
Now, assuming you have a locking order like:

  p->pi_lock
    rq->lock
      dsq->lock
When you do something like:

  __schedule()
    raw_spin_lock(rq->lock);
    next = pick_next_task() -> pick_task_scx()
      raw_spin_lock(dsq->lock);
Then you are, in effect, in the RUNNABLE 1) case above. You hold both
locks. Nothing is going to move your task around while you hold that
dsq->lock. That task is on the dsq, anybody else wanting to also do
anything with that task, will have to first take dsq->lock.
Therefore, at this point, it is perfectly fine to do:

  set_task_cpu(p, cpu_of(rq)); // move task here
There is no actual concurrency. The only thing left is
set_task_cpu() complaining that you're not following the rules -- but you
are, it just doesn't know -- and we can fix that.
Or am I still missing something?