Date:   Thu, 27 Feb 2020 10:56:07 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Qian Cai <cai@....pw>,
        Valentin Schneider <valentin.schneider@....com>,
        Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>,
        Juri Lelli <juri.lelli@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        linux-kernel@...r.kernel.org
Subject: Re: suspicious RCU due to "Prefer using an idle CPU as a migration
 target instead of comparing tasks"

On Thu, Feb 27, 2020 at 05:19:34PM +0000, Mel Gorman wrote:
> On Thu, Feb 27, 2020 at 11:47:04AM -0500, Qian Cai wrote:
> > On Thu, 2020-02-27 at 11:35 -0500, Qian Cai wrote:
> > > On Thu, 2020-02-27 at 15:26 +0000, Valentin Schneider wrote:
> > > > On Thu, Feb 27 2020, Qian Cai wrote:
> > > > 
> > > > > On Thu, 2020-02-27 at 09:09 -0500, Qian Cai wrote:
> > > > > > The linux-next commit ff7db0bf24db ("sched/numa: Prefer using an idle CPU as a
> > > > > > migration target instead of comparing tasks") introduced a boot warning,
> > > > > 
> > > > > This?
> > > > > 
> > > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > > index a61d83ea2930..ca780cd1eae2 100644
> > > > > --- a/kernel/sched/fair.c
> > > > > +++ b/kernel/sched/fair.c
> > > > > @@ -1607,7 +1607,9 @@ static void update_numa_stats(struct task_numa_env *env,
> > > > >  			if (ns->idle_cpu == -1)
> > > > >  				ns->idle_cpu = cpu;
> > > > >  
> > > > > +			rcu_read_lock();
> > > > >  			idle_core = numa_idle_core(idle_core, cpu);
> > > > > +			rcu_read_unlock();
> > > > >  		}
> > > > >  	}
> > > > > 
> > > > 
> > > > 
> > > > Hmph right, we have
> > > > numa_idle_core()->test_idle_cores()->rcu_dereference().
> > > > 
> > > > Dunno if it's preferable to wrap the entirety of update_numa_stats() or
> > > > if that fine-grained read-side section is ok.
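
(For reference, the rcu_dereference() at the end of that chain is the
one in test_idle_cores(), which at the time looked roughly like the
sketch below -- treat it as approximate rather than the exact fair.c
code:

	static inline bool test_idle_cores(int cpu, bool def)
	{
		struct sched_domain_shared *sds;

		/* Complains if lockdep cannot see an enclosing
		 * rcu_read_lock() -- hence the boot-time splat. */
		sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
		if (sds)
			return READ_ONCE(sds->has_idle_cores);

		return def;
	}

so every caller has to run inside a read-side critical section.)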
> > > 
> > > I could not come up with a better fine-grained one than this.
> > 
> > Correction -- this one,
> > 
> 
> Thanks for reporting this!
> 
> The proposed fix would add a lot of rcu lock/unlock pairs. While they
> are cheap, they're not free, and it's a fairly standard pattern to
> acquire the rcu lock when scanning CPUs during a domain search (load
> balancing, nohz balance, idle balance etc.). While in this context the
> lock is only needed for SMT, I do not think it's worthwhile to
> fine-grain this or conditionally acquire the rcu lock, so can we keep
> it simple?

Indeed, scanning CPUs within a single RCU read-side critical section
should be OK.  As long as each CPU isn't burning too much time.  ;-)
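
In other words, schematically (placeholder loop body, not the exact
fair.c code):

	/* Per-iteration bracketing: correct, but pays the (small)
	 * rcu_read_lock()/rcu_read_unlock() cost once per CPU. */
	for_each_cpu(cpu, cpumask_of_node(nid)) {
		rcu_read_lock();
		idle_core = numa_idle_core(idle_core, cpu);
		rcu_read_unlock();
	}

	/* One enclosing section: a single lock/unlock pair for the
	 * whole scan.  Readers hold up grace periods for the section's
	 * duration, so the per-CPU work must stay short. */
	rcu_read_lock();
	for_each_cpu(cpu, cpumask_of_node(nid))
		idle_core = numa_idle_core(idle_core, cpu);
	rcu_read_unlock();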

						Thanx, Paul

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 11cdba201425..d34ac4ea5cee 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1592,6 +1592,7 @@ static void update_numa_stats(struct task_numa_env *env,
>  	memset(ns, 0, sizeof(*ns));
>  	ns->idle_cpu = -1;
>  
> +	rcu_read_lock();
>  	for_each_cpu(cpu, cpumask_of_node(nid)) {
>  		struct rq *rq = cpu_rq(cpu);
>  
> @@ -1611,6 +1612,7 @@ static void update_numa_stats(struct task_numa_env *env,
>  			idle_core = numa_idle_core(idle_core, cpu);
>  		}
>  	}
> +	rcu_read_unlock();
>  
>  	ns->weight = cpumask_weight(cpumask_of_node(nid));
>  
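
For anyone chasing the same splat: the warning fires from the lockdep
hook inside rcu_dereference() under CONFIG_PROVE_RCU, which behaves
roughly like the sketch below (heavily simplified; the real macros
live in include/linux/rcupdate.h):

	/* Simplified model of the check -- not the real macro. */
	#define rcu_dereference_sketch(p) \
	({ \
		RCU_LOCKDEP_WARN(!rcu_read_lock_held(), \
				 "suspicious rcu_dereference_check() usage"); \
		READ_ONCE(p); \
	})

With the patch above, rcu_read_lock_held() is true for the entire
scan, so the warning goes away.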
