Message-ID: <3e88f113-e7b9-9b26-a4c1-52cf92b820c5@amd.com>
Date: Mon, 22 Jul 2024 10:47:40 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Chuyi Zhou <zhouchuyi@...edance.com>
CC: <mingo@...hat.com>, <peterz@...radead.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<vschneid@...hat.com>, <chengming.zhou@...ux.dev>,
<linux-kernel@...r.kernel.org>, Qais Yousef <qyousef@...alina.io>
Subject: Re: [PATCH v2] sched/fair: Sync se's load_avg with cfs_rq in
reweight_task
(+ Qais)
Hello Chuyi,
On 7/20/2024 10:42 AM, Chuyi Zhou wrote:
> In reweight_task(), there are two situations:
>
> 1. The task was on_rq, then the task's load_avg is accurate because we
> synchronized it with cfs_rq through update_load_avg() in dequeue_task().
>
> 2. The task is sleeping; its load_avg might not have been updated for some
> time, which can result in an inaccurate dequeue_load_avg() in
> reweight_entity().
>
> This patch solves this by using update_load_avg() to synchronize the
> load_avg of the se with the cfs_rq. For tasks that were on_rq, load_avg was
> already updated to an accurate value in dequeue_task(), so this change has
> no further effect given the short interval between the two updates.
>
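As an aside, to convince myself of the size of the inaccuracy described
above, I threw together a toy model (plain userspace C, a simplified
halve-every-32ms decay, made-up numbers; none of this is kernel code) of
what dequeue_load_avg() + enqueue_load_avg() end up doing to the cfs_rq
when se->avg is stale vs. freshly decayed:

/*
 * Toy model only: simplified decay (halves every 32ms), made-up numbers,
 * no load_sum/divider handling. Just compares removing a stale
 * se->avg.load_avg against removing a freshly decayed one.
 */
#include <stdio.h>
#include <math.h>

static double decay(double val, double ms)
{
        return val * pow(0.5, ms / 32.0);
}

int main(void)
{
        double se_load_avg = 800.0;             /* frozen when the task went to sleep */
        double cfs_rq_load_avg = 1000.0;        /* cfs_rq keeps decaying the blocked load */
        double slept_ms = 64.0;
        double reweight_factor = 2.0;           /* new weight / old weight */

        /* the cfs_rq side has been decayed meanwhile (update_cfs_rq_load_avg()) */
        double cfs_now = cfs_rq_load_avg - se_load_avg + decay(se_load_avg, slept_ms);

        /* no sync: remove the stale value, add back a reweighted stale value */
        double stale = cfs_now - se_load_avg + reweight_factor * se_load_avg;

        /* with update_load_avg() first: se->avg is decayed to "now" before the reweight */
        double fresh_se = decay(se_load_avg, slept_ms);
        double synced = cfs_now - fresh_se + reweight_factor * fresh_se;

        printf("cfs_rq load_avg after reweight: stale %.0f vs synced %.0f\n",
               stale, synced);
        return 0;
}

With these made-up numbers the stale path leaves the cfs_rq at roughly
double the synced value, which matches my reading of why the
update_load_avg() added below helps.
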
> Signed-off-by: Chuyi Zhou <zhouchuyi@...edance.com>
> ---
> Changes in v2:
> - change the description in commit log.
> - use update_load_avg() in reweight_task() rather than in reweight_entity
> suggested by chengming.
> - Link to v1: https://lore.kernel.org/lkml/20240716150840.23061-1-zhouchuyi@bytedance.com/
> ---
> kernel/sched/fair.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9057584ec06d..b1e07ce90284 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3835,12 +3835,15 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
> }
> }
>
> +static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags);
> +
> void reweight_task(struct task_struct *p, const struct load_weight *lw)
> {
> struct sched_entity *se = &p->se;
> struct cfs_rq *cfs_rq = cfs_rq_of(se);
> struct load_weight *load = &se->load;
>
> + update_load_avg(cfs_rq, se, 0);
Seems to be necessary when we reach here from __setscheduler_params() or
set_user_nice() for a sleeping task. Please feel free to add:
Reviewed-by: K Prateek Nayak <kprateek.nayak@....com>
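
For my own understanding of those two entry points: if I'm reading
kernel/sched/core.c right, both funnel through set_load_weight(), and a
sleeping task is not dequeued/enqueued anywhere on that path, so this
reweight_task() is the only place its stale se->avg gets looked at. A stub
sketch of the chain (plain userspace C with made-up bodies and placeholder
values; only the call order is the point, details on your tree may differ):

#include <stdio.h>

struct load_weight { unsigned long weight, inv_weight; };
struct sched_entity { struct load_weight load; int on_rq; };
struct task_struct { int static_prio; struct sched_entity se; };

/* stand-in for reweight_task(): with this patch it syncs load_avg first */
static void reweight_task(struct task_struct *p, const struct load_weight *lw)
{
        /* update_load_avg(cfs_rq, &p->se, 0);          <- new in this patch   */
        /* reweight_entity(cfs_rq, &p->se, lw->weight); <- dequeue/enqueue avg */
        p->se.load = *lw;
}

/* stand-in for set_load_weight(): fair tasks go through reweight_task() */
static void set_load_weight(struct task_struct *p, int update_load)
{
        /* the real code looks the weight up from p->static_prio; placeholders here */
        struct load_weight lw = { .weight = 1024, .inv_weight = 4194304 };

        if (update_load)
                reweight_task(p, &lw);
        else
                p->se.load = lw;
}

/*
 * stand-in for set_user_nice(): a sleeping task is neither queued nor
 * running, so it is not dequeued/enqueued around the weight change
 */
static void set_user_nice(struct task_struct *p, long nice)
{
        p->static_prio = 120 + (int)nice;
        set_load_weight(p, 1);
}

int main(void)
{
        struct task_struct p = { .static_prio = 120, .se = { .on_rq = 0 } };

        set_user_nice(&p, 5);
        printf("weight is now %lu\n", p.se.load.weight);
        return 0;
}
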
But since we are on the subject of accurate PELT accounting, one question
I have here is whether a reweight_task() for a sleeping task can race with
its wakeup? Something like the following scenario:
CPU0                                                    CPU1
====                                                    ====

/* No rq locks held until ttwu_queue() */
try_to_wake_up(p) {
    ...
    /* Migrating task */
    set_task_cpu(p, cpu) {
        /* p->sched_class->migrate_task_rq(p, new_cpu); */  /* Called with task_cpu(p)'s rq lock held */
        migrate_task_rq_fair() {                        reweight_task(p) {
            /* p is still sleeping */                       ...
            if (!task_on_rq_migrating(p)) {                 dequeue_load_avg(cfs_rq, se);
                remove_entity_load_avg(se);                 update_load_set(&se->load, weight);
                ...                                         enqueue_load_avg(cfs_rq, se);
            }                                               ...
        }                                               }

        /* task_cpu() is updated here */
        __set_task_cpu(p, new_cpu);
    }
    ttwu_queue();
}
In theory, remove_entity_load_avg() could record a stale value of
"load_avg" that is then pulled out of the cfs_rq the next time the removed
load is applied, if I'm not mistaken? But I believe these small
inaccuracies are tolerable since they'll decay in a while anyway?
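
To put a rough number on "decay in a while": a toy calculation (again
plain userspace C with the simplified halve-every-32ms decay; not kernel
code) of how quickly an error from a stale removed-load snapshot would
die out:

#include <stdio.h>
#include <math.h>

int main(void)
{
        /* pretend the stale snapshot over-removes this much load_avg */
        double err = 1024.0;

        /* PELT-style decay: roughly halves every 32ms */
        for (int ms = 0; ms <= 256; ms += 32)
                printf("%3dms: residual error %.1f\n", ms, err * pow(0.5, ms / 32.0));

        return 0;
}

So even a large stale snapshot should be mostly gone within a couple
hundred ms.
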
> reweight_entity(cfs_rq, se, lw->weight);
> load->inv_weight = lw->inv_weight;
> }
--
Thanks and Regards,
Prateek