Message-ID: <5e97cfda-48e1-4a7d-ba66-33751463e98d@arm.com>
Date: Wed, 24 Jul 2024 13:34:31 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Chen Yu <yu.c.chen@...el.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: kernel test robot <oliver.sang@...el.com>, oe-lkp@...ts.linux.dev,
lkp@...el.com, linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
Lukasz Luba <lukasz.luba@....com>, Qais Yousef <qyousef@...alina.io>
Subject: Re: [linus:master] [sched/pelt] 97450eb909:
INFO:task_blocked_for_more_than#seconds
On 12/07/2024 18:41, Chen Yu wrote:
> On 2024-07-09 at 12:03:42 +0200, Vincent Guittot wrote:
>> On Tue, 9 Jul 2024 at 09:22, kernel test robot <oliver.sang@...el.com> wrote:
>>>
>>> Hello,
>>>
>>> kernel test robot noticed "INFO:task_blocked_for_more_than#seconds" on:
>>>
>>> commit: 97450eb909658573dcacc1063b06d3d08642c0c1 ("sched/pelt: Remove shift of thermal clock")
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>
>> First, I'm surprised that an Intel platform is impacted by this patch
>> because Intel doesn't use it AFAIK.
>> Then, this patch mainly removes a right shift, i.e. instead of:
>> return rq_clock_task(rq) >> sched_hw_decay_shift
>> we are now doing:
>> return rq_clock_task(rq)
>>
>> Could it be a false positive ?
>
> Before trying to reproduce it locally, one question is that, should we use
> rq_clock_task(rq) in __update_blocked_others() rather than 'now', which is
> actually calculated by rq_clock_pelt(rq)?
>
> thanks,
> Chenyu
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d34f6d5b11b5..17ec0c51b29d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9432,7 +9432,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>
> decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
> update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
> - update_hw_load_avg(now, rq, hw_pressure) |
> + update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
> update_irq_load_avg(rq, 0);
>
> if (others_have_blocked(rq))
Yes, update_hw_load_avg() should be driven entirely by
rq_clock_task(rq). But IMHO this PELT signal is only used on some arm64
platforms, so you won't detect any misbehavior running your tests on Intel.