Message-ID: <20240624123529.GM31592@noisy.programming.kicks-ass.net>
Date: Mon, 24 Jun 2024 14:35:29 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: torvalds@...ux-foundation.org, mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, ast@...nel.org,
daniel@...earbox.net, andrii@...nel.org, martin.lau@...nel.org,
joshdon@...gle.com, brho@...gle.com, pjt@...gle.com,
derkling@...gle.com, haoluo@...gle.com, dvernet@...a.com,
dschatzberg@...a.com, dskarlat@...cmu.edu, riel@...riel.com,
changwoo@...lia.com, himadrics@...ia.fr, memxor@...il.com,
andrea.righi@...onical.com, joel@...lfernandes.org,
linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
kernel-team@...a.com
Subject: Re: [PATCH 10/39] sched: Factor out update_other_load_avgs() from
__update_blocked_others()
On Wed, May 01, 2024 at 05:09:45AM -1000, Tejun Heo wrote:
> RT, DL, thermal and irq load and utilization metrics need to be decayed and
> updated periodically and before consumption to keep the numbers reasonable.
> This is currently done from __update_blocked_others() as a part of the fair
> class load balance path. Let's factor it out to update_other_load_avgs().
> Pure refactor. No functional changes.
>
> This will be used by the new BPF extensible scheduling class to ensure that
> the above metrics are properly maintained.
>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Reviewed-by: David Vernet <dvernet@...a.com>
> ---
> kernel/sched/core.c | 19 +++++++++++++++++++
> kernel/sched/fair.c | 16 +++-------------
> kernel/sched/sched.h | 3 +++
> 3 files changed, 25 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 90b505fbb488..7542a39f1fde 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7486,6 +7486,25 @@ int sched_core_idle_cpu(int cpu)
> #endif
>
> #ifdef CONFIG_SMP
> +/*
> + * Load avg and utilization metrics need to be updated periodically and before
> + * consumption. This function updates the metrics for all subsystems except for
> + * the fair class. @rq must be locked and have its clock updated.
> + */
> +bool update_other_load_avgs(struct rq *rq)
> +{
> + u64 now = rq_clock_pelt(rq);
> + const struct sched_class *curr_class = rq->curr->sched_class;
> + unsigned long thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
> +
> + lockdep_assert_rq_held(rq);
> +
> + return update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
> + update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
> + update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure) |
> + update_irq_load_avg(rq, 0);
> +}
Yeah, but you then ignore the return value and don't call into cpufreq.
Vincent, what would be the right thing to do here?