Message-ID: <20201113085637.GA31601@vingu-book>
Date: Fri, 13 Nov 2020 09:56:37 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Dan Carpenter <dan.carpenter@...cle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Valentin Schneider <valentin.schneider@....com>,
linux-kernel@...r.kernel.org
Subject: Re: [bug report] sched/fair: Prefer prev cpu in asymmetric wakeup path
Hi Dan,
On Friday, 13 Nov 2020 at 11:46:57 (+0300), Dan Carpenter wrote:
> Hello Vincent Guittot,
>
> The patch b4c9c9f15649: "sched/fair: Prefer prev cpu in asymmetric
> wakeup path" from Oct 29, 2020, leads to the following static checker
> warning:
>
>         kernel/sched/fair.c:6249 select_idle_sibling()
>         error: uninitialized symbol 'task_util'.
>
> kernel/sched/fair.c
> 6233 static int select_idle_sibling(struct task_struct *p, int prev, int target)
> 6234 {
> 6235         struct sched_domain *sd;
> 6236         unsigned long task_util;
> 6237         int i, recent_used_cpu;
> 6238 
> 6239         /*
> 6240          * On asymmetric system, update task utilization because we will check
> 6241          * that the task fits with cpu's capacity.
> 6242          */
>
> The original comment was a bit more clear... Perhaps "On asymmetric
> system[s], [record the] task utilization because we will check that the
> task [can be done within] the cpu's capacity."
The "update task utilization because we will check ..." wording refers to
sync_entity_load_avg(), which brings the task's utilization up to date
before uclamp_task_util() reads it.
>
> 6243         if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> 6244                 sync_entity_load_avg(&p->se);
> 6245                 task_util = uclamp_task_util(p);
> 6246         }
>
> "task_util" is not initialized on the else path.
There is no need, because it will not be used in that case.
>
> 6247 
> 6248         if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
> 6249             asym_fits_capacity(task_util, target))
>                                     ^^^^^^^^^
> Uninitialized variable warning.
asym_fits_capacity() checks the same static branch as the code above that
sets task_util, so task_util can't be used uninitialized:
static inline bool asym_fits_capacity(int task_util, int cpu)
{
        if (static_branch_unlikely(&sched_asym_cpucapacity))
                return fits_capacity(task_util, capacity_of(cpu));

        return true;
}
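
For what it's worth, the pattern that trips the checker reduces to a minimal
standalone sketch (hypothetical names, not kernel code): the variable is
written and read behind the same condition, so no path ever consumes it
uninitialized, but a checker that does not correlate the two branches still
warns at the call site.

/*
 * Minimal sketch: 'guard' plays the role of the sched_asym_cpucapacity
 * static key, 'val' the role of task_util.
 */
#include <stdbool.h>
#include <stdio.h>

static bool guard;      /* stands in for static_branch_unlikely(...) */

/* Mirrors asym_fits_capacity(): only reads 'val' behind the guard. */
static bool guarded_fits(int val, int limit)
{
        if (guard)
                return val < limit;
        return true;
}

int main(void)
{
        int val;        /* deliberately not initialized, like task_util */

        if (guard)
                val = 42;       /* written under the same guard that permits the read */

        /*
         * A checker flags this call because 'val' may be passed while still
         * indeterminate (strictly, that is what the tool objects to), but
         * guarded_fits() never reads it on that path: the read is only
         * reachable when the write above also ran.
         */
        if (guarded_fits(val, 100))
                puts("fits");

        return 0;
}

In the kernel, the guard is the sched_asym_cpucapacity static key, evaluated
identically at the write in select_idle_sibling() and at the read in
asym_fits_capacity().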
>
> 6250                 return target;
> 6251 
> 6252         /*
> 6253          * If the previous CPU is cache affine and idle, don't be stupid:
> 6254          */
> 6255         if (prev != target && cpus_share_cache(prev, target) &&
> 6256             (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
> 6257             asym_fits_capacity(task_util, prev))
> 6258                 return prev;
> 6259 
> 6260         /*
> 6261          * Allow a per-cpu kthread to stack with the wakee if the
>
> regards,
> dan carpenter