Message-ID: <20230721015704.GA212678@ziqianlu-dell>
Date: Fri, 21 Jul 2023 09:57:04 +0800
From: Aaron Lu <aaron.lu@...el.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
CC: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
"Mel Gorman" <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Tim Chen <tim.c.chen@...el.com>,
Nitin Tekchandani <nitin.tekchandani@...el.com>,
Yu Chen <yu.c.chen@...el.com>,
Waiman Long <longman@...hat.com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 3/4] sched/fair: delay update_tg_load_avg() for
cfs_rq's removed load
On Thu, Jul 20, 2023 at 05:02:32PM +0200, Vincent Guittot wrote:
>
> What was wrong with your proposal to limit the update inside
> update_tg_load_avg() ? except maybe s/1000000/NSEC_PER_MSEC/ and
> computing delta after testing the time since last update
I was thinking it might be better to align update_tg_load_avg() with the
cfs_rq's decay point, but that's just my feeling.

There's absolutely nothing wrong with the approach below; it's
straightforward and effective. I'll fix the use of cfs_rq_clock_pelt(),
collect some data and then send out v2.

Thank you, Vincent, for all your comments; they're very useful to me.

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a80a73909dc2..e48fd0e6982d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3665,6 +3665,7 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
>  static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
>  {
>  	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> +	u64 now = cfs_rq_clock_pelt(cfs_rq);
>  
>  	/*
>  	 * No need to update load_avg for root_task_group as it is not used.
> @@ -3672,9 +3673,11 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
>  	if (cfs_rq->tg == &root_task_group)
>  		return;
>  
> -	if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
> +	if ((now - cfs_rq->last_update_tg_load_avg > 1000000) &&
> +	    abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
>  		atomic_long_add(delta, &cfs_rq->tg->load_avg);
>  		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
> +		cfs_rq->last_update_tg_load_avg = now;
>  	}
>  }
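
For reference, here is a rough sketch of the direction I have in mind for
v2, folding in both of your suggestions (test the elapsed time before
computing delta, and s/1000000/NSEC_PER_MSEC/). The clock source and the
new last_update_tg_load_avg field in struct cfs_rq are still open
questions, so treat this as illustrative only, not the actual patch:

static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
{
	/* Clock source may still change in v2, as noted above. */
	u64 now = cfs_rq_clock_pelt(cfs_rq);
	long delta;

	/* No need to update load_avg for root_task_group as it is not used. */
	if (cfs_rq->tg == &root_task_group)
		return;

	/* Rate limit updates of tg->load_avg to at most once per ms. */
	if (now - cfs_rq->last_update_tg_load_avg < NSEC_PER_MSEC)
		return;

	/* Only compute delta once the rate limit allows an update. */
	delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
	if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
		atomic_long_add(delta, &cfs_rq->tg->load_avg);
		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
		cfs_rq->last_update_tg_load_avg = now;
	}
}

With the early return, the delta computation and the /64 comparison are
skipped entirely when the rate limit hasn't expired, which matches the
ordering you suggested.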