Message-ID: <jhjeeugvsxr.mognet@arm.com>
Date: Thu, 27 Feb 2020 17:30:40 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Qian Cai <cai@....pw>, Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>, paulmck@...nel.org,
linux-kernel@...r.kernel.org
Subject: Re: suspicious RCU due to "Prefer using an idle CPU as a migration target instead of comparing tasks"
On Thu, Feb 27 2020, Mel Gorman wrote:
> Thanks for reporting this!
>
> The proposed fix would result in a lot of rcu locks and unlocks. While
> they are cheap, they're not free, and it's a fairly standard pattern to
> acquire the rcu lock once when scanning CPUs during a domain search
> (load balancing, nohz balance, idle balance, etc.). While in this
> context the lock is only needed for SMT, I do not think it's worthwhile
> fine-graining this or conditionally acquiring the rcu lock, so can we
> keep it simple?
>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 11cdba201425..d34ac4ea5cee 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1592,6 +1592,7 @@ static void update_numa_stats(struct task_numa_env *env,
>  	memset(ns, 0, sizeof(*ns));
>  	ns->idle_cpu = -1;
>  
> +	rcu_read_lock();
>  	for_each_cpu(cpu, cpumask_of_node(nid)) {
>  		struct rq *rq = cpu_rq(cpu);
>  
> @@ -1611,6 +1612,7 @@ static void update_numa_stats(struct task_numa_env *env,
>  			idle_core = numa_idle_core(idle_core, cpu);
>  		}
>  	}
> +	rcu_read_unlock();
>  
>  	ns->weight = cpumask_weight(cpumask_of_node(nid));
>
That's closer to what I was trying to suggest (i.e. broaden the section
rather than reduce it).
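
To make the comparison concrete, here is a rough sketch of the two
shapes (illustrative only, condensed from the hunk above; the loop
bodies are elided):

	/* Broad section, as in the patch: one lock/unlock pair
	 * covering the whole per-node scan. */
	rcu_read_lock();
	for_each_cpu(cpu, cpumask_of_node(nid)) {
		/* ... per-CPU stats and idle-core probing ... */
	}
	rcu_read_unlock();

	/* Fine-grained alternative: lock/unlock churn on every
	 * iteration, even though only the SMT/idle-core probe
	 * actually dereferences RCU-protected data. */
	for_each_cpu(cpu, cpumask_of_node(nid)) {
		rcu_read_lock();
		/* ... idle-core probing ... */
		rcu_read_unlock();
	}

The broad section keeps the read-side critical section obvious and
avoids the per-iteration overhead, at the cost of holding the lock
slightly longer than strictly necessary.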