Message-ID: <20250523115921.GB1240558@bytedance>
Date: Fri, 23 May 2025 19:59:56 +0800
From: Aaron Lu <ziqianlu@...edance.com>
To: Chengming Zhou <chengming.zhou@...ux.dev>
Cc: Valentin Schneider <vschneid@...hat.com>,
Ben Segall <bsegall@...gle.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Xi Wang <xii@...gle.com>, linux-kernel@...r.kernel.org,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>,
Chuyi Zhou <zhouchuyi@...edance.com>,
Jan Kiszka <jan.kiszka@...mens.com>,
Florian Bezdeka <florian.bezdeka@...mens.com>
Subject: Re: [PATCH 4/7] sched/fair: Take care of group/affinity/sched_class
change for throttled task
On Fri, May 23, 2025 at 05:53:55PM +0800, Chengming Zhou wrote:
> On 2025/5/23 17:42, Aaron Lu wrote:
> > On Fri, May 23, 2025 at 05:13:35PM +0800, Chengming Zhou wrote:
> > > On 2025/5/23 15:56, Aaron Lu wrote:
> > > > On Fri, May 23, 2025 at 10:43:53AM +0800, Chengming Zhou wrote:
> > > > > On 2025/5/20 18:41, Aaron Lu wrote:
> > > > > > On task group change, for a task whose on_rq equals TASK_ON_RQ_QUEUED,
> > > > > > core will dequeue it and then requeue it.
> > > > > >
> > > > > > The throttled task is still considered queued by core because p->on_rq
> > > > > > is still set, so core will dequeue it; but since the task is already
> > > > > > dequeued on throttle in fair, handle this case properly.
> > > > > >
> > > > > > Affinity and sched class changes are similar.
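For context, the core-side dequeue/requeue cycle described above looks
roughly like the sketch below, paraphrased from sched_move_task() in
kernel/sched/core.c; this is simplified and details vary between kernel
versions, so it is not the exact code:

	static void sched_move_task_sketch(struct rq *rq, struct task_struct *tsk,
					   struct task_group *group)
	{
		unsigned int queue_flags =
			DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
		bool queued = task_on_rq_queued(tsk); /* true even while throttled */

		if (queued)
			dequeue_task(rq, tsk, queue_flags);

		sched_change_group(tsk, group);	/* switch tsk to the new group */

		if (queued)	/* ENQUEUE_* flags alias the DEQUEUE_* ones */
			enqueue_task(rq, tsk, queue_flags);
	}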
> > > > >
> > > > > How about setting p->on_rq to 0 when throttled? That would reflect the
> > > > > fact that the task is not on the cfs queue anymore. Does this method
> > > > > cause any problem?
> > > > >
> > > >
> > > > On task group change/affinity change etc., if the throttled task is
> > > > regarded as !on_rq, then it will miss the chance to be enqueued to the
> > > > new (and correct) cfs_rq; instead, it will be enqueued back to its
> > > > original cfs_rq on unthrottle, which breaks affinity or task group
> > >
> > > Yeah, this is indeed a problem. I was thinking of deleting the throttled
> > > task from the cfs_rq limbo list, then adding it to another cfs_rq's limbo
> > > list or runnable tree based on the new cfs_rq's throttle status.
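For illustration, such an alternative might look roughly like the hypothetical
sketch below. The helper name and the throttled_limbo_list field are made up
for illustration; only p->throttle_node comes from this series, while
throttled_hierarchy() and enqueue_task_fair() are existing fair.c helpers:

	/* Hypothetical: move a throttled task towards its new cfs_rq. */
	static void move_throttled_task(struct rq *rq, struct task_struct *p,
					struct cfs_rq *dst_cfs_rq)
	{
		list_del_init(&p->throttle_node);

		if (throttled_hierarchy(dst_cfs_rq))
			/* destination is throttled too: park in its limbo list */
			list_add(&p->throttle_node,
				 &dst_cfs_rq->throttled_limbo_list);
		else
			/* destination can run: make the task runnable again */
			enqueue_task_fair(rq, p, ENQUEUE_WAKEUP);
	}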
> >
> > That only works when the task is still handled by fair :)
> >
> > >
> > > But it's much more complex compared with your current method.
> > >
> > > > settings. We may be able to do something in tg_unthrottle_up() to take
> > > > special care of these situations, but that seems like a lot of headache.
> > > >
> > > > Also, for task group change, if the new task group does not have a
> > > > throttle setting, the throttled task should be allowed to run immediately
> > > > instead of waiting for its old cfs_rq's unthrottle event. The same is true
> > > > when the throttled task changes its sched class, e.g. from fair to rt.
> > > >
> > > > Makes sense?
> > >
> > > Ok, another problem with the current method I can think of is PELT
> > > maintenance: we skip the actual dequeue_task_fair() process, which includes
> > > the PELT detach, and just delete the task from the cfs_rq limbo list, so it
> > > can result in PELT maintenance errors.
> > >
> >
> > There are corresponding callbacks that handle this, e.g. for task group
> > change, there is task_change_group_fair() that handles the PELT detach; for
> > affinity change, migrate_task_rq_fair() does that; and for sched class
> > change, switched_from/to() does that.
> >
> > Or did I miss anything?
>
> migrate_task_rq_fair() only does it when !task_on_rq_migrating(p), i.e. wakeup
> migration, because we already do the detach in dequeue_task_fair() for on_rq
> task migration...
> You can see the DO_DETACH flag in update_load_avg() called from dequeue_entity().
>
Understood, thanks for catching this!
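For reference, the logic you point at looks roughly like this, paraphrased
from dequeue_entity()/update_load_avg() in current kernels (not the exact
code):

	/* dequeue_entity(): request a detach when the task is migrating out */
	int action = UPDATE_TG;

	if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
		action |= DO_DETACH;
	...
	update_load_avg(cfs_rq, se, action);

	/* update_load_avg(): perform the attach/detach requested by callers */
	if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
		attach_entity_load_avg(cfs_rq, se);
		update_tg_load_avg(cfs_rq);
	} else if (flags & DO_DETACH) {
		detach_entity_load_avg(cfs_rq, se);
		update_tg_load_avg(cfs_rq);
	}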
So the code was initially developed on top of v5.15, where there was a
detach in migrate_task_rq_fair():
	if (p->on_rq == TASK_ON_RQ_MIGRATING) {
		/*
		 * In case of TASK_ON_RQ_MIGRATING we in fact hold the 'old'
		 * rq->lock and can modify state directly.
		 */
		lockdep_assert_rq_held(task_rq(p));
		detach_entity_cfs_rq(&p->se);
	}
But it looks like that's gone now, since commit e1f078f50478 ("sched/fair:
Combine detach into dequeue when migrating task"), and I failed to notice
this detail...
Anyway, the task was already dequeued, without TASK_ON_RQ_MIGRATING being
set, when it was throttled, and it can't be dequeued again, so perhaps
something like below could cure this situation? (Just to illustrate the
idea, not even compile tested.)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 89afa472299b7..dc2e9a6bf3fd7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5868,6 +5868,9 @@ static void dequeue_throttled_task(struct task_struct *p, int flags)
 	WARN_ON_ONCE(flags & DEQUEUE_SLEEP);
 	list_del_init(&p->throttle_node);
+
+	if (task_on_rq_migrating(p))
+		detach_task_cfs_rq(p);
 }
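For completeness: detach_task_cfs_rq() is the task-level wrapper around the
entity detach and, if I read the current code right, is roughly:

	static void detach_task_cfs_rq(struct task_struct *p)
	{
		struct sched_entity *se = &p->se;

		detach_entity_cfs_rq(se);
	}

So the hunk above would replicate for throttled tasks what the DO_DETACH
path does for a normally queued task, and the later attach on the new
cfs_rq should then happen through the regular enqueue path (DO_ATTACH fires
because migrate_task_rq_fair() clears p->se.avg.last_update_time for
migrating tasks).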