Message-ID: <0e29059a-7f1c-523d-c3ec-e17bbc094af9@linux.dev>
Date: Fri, 18 Aug 2023 21:17:32 +0800
From: Chengming Zhou <chengming.zhou@...ux.dev>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...hat.com, peterz@...radead.org, ycliang@...estech.com,
juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com,
zhouchengming@...edance.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/fair: Fix cfs_rq_is_decayed() on !SMP
On 2023/8/18 20:25, Vincent Guittot wrote:
> On Fri, 18 Aug 2023 at 13:37, <chengming.zhou@...ux.dev> wrote:
>>
>> From: Chengming Zhou <zhouchengming@...edance.com>
>>
>> We don't need to maintain the per-queue leaf_cfs_rq_list on !SMP, since
>> it's only used for cfs_rq load tracking & balancing on SMP.
>>
>> But the sched debug interface uses it to print per-cfs_rq stats; it
>> may be better to change that to use walk_tg_tree_from() instead.
>>
>> This patch just fixes the !SMP version of cfs_rq_is_decayed(), so that
>> the per-queue leaf_cfs_rq_list is also maintained correctly on !SMP,
>> fixing the warning in assert_list_leaf_cfs_rq().
>>
>> Fixes: 0a00a354644e ("sched/fair: Delete useless condition in tg_unthrottle_up()")
>> Reported-by: Leo Liang <ycliang@...estech.com>
>> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
>> ---
>> kernel/sched/fair.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index a80a73909dc2..00ef7e86a95b 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4654,6 +4654,8 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
>>
>> static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
>> {
>> + if (cfs_rq->load.weight)
>> + return false;
>> return true;
>
> Why not :
>
> return !(cfs_rq->nr_running);
>
> The above seems easier to understand, although I agree that both do
> the same thing in the end
>
Yes, this is better.
Thanks.