Message-ID: <ecbf5317-e6cf-fc20-9871-4ea06a987952@arm.com>
Date: Tue, 18 Feb 2020 13:37:45 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com, rostedt@...dmis.org,
bsegall@...gle.com, mgorman@...e.de, linux-kernel@...r.kernel.org
Cc: pauld@...hat.com, parth@...ux.ibm.com, valentin.schneider@....com,
hdanton@...a.com
Subject: Re: [PATCH v2 2/5] sched/numa: Replace runnable_load_avg by load_avg
On 14/02/2020 16:27, Vincent Guittot wrote:
[...]
> /*
> * The load is corrected for the CPU capacity available on each node.
> *
> @@ -1788,10 +1831,10 @@ static int task_numa_migrate(struct task_struct *p)
> dist = env.dist = node_distance(env.src_nid, env.dst_nid);
> taskweight = task_weight(p, env.src_nid, dist);
> groupweight = group_weight(p, env.src_nid, dist);
> - update_numa_stats(&env.src_stats, env.src_nid);
> + update_numa_stats(&env, &env.src_stats, env.src_nid);
This looks strange. Could you pass only the field update_numa_stats()
actually needs instead of the whole env, i.e.:

-static void update_numa_stats(struct task_numa_env *env,
+static void update_numa_stats(unsigned int imbalance_pct,
 			      struct numa_stats *ns, int nid)

and at the call site:

- update_numa_stats(&env, &env.src_stats, env.src_nid);
+ update_numa_stats(env.imbalance_pct, &env.src_stats, env.src_nid);
[...]
> +static unsigned long cpu_runnable_load(struct rq *rq)
> +{
> + return cfs_rq_runnable_load_avg(&rq->cfs);
> +}
> +
Why not remove cpu_runnable_load() in this patch rather than moving it?
Otherwise it triggers:
kernel/sched/fair.c:5492:22: warning: ‘cpu_runnable_load’ defined but
not used [-Wunused-function]
static unsigned long cpu_runnable_load(struct rq *rq)