Message-ID: <20180517142322.GU12217@hirez.programming.kicks-ass.net>
Date:   Thu, 17 May 2018 16:23:22 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     Mike Galbraith <efault@....de>,
        Matt Fleming <matt@...eblueprint.co.uk>,
        Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
        Michal Hocko <mhocko@...e.com>
Subject: Re: cpu stopper threads and load balancing leads to deadlock

On Thu, May 17, 2018 at 07:03:45AM -0700, Paul E. McKenney wrote:
> On Tue, May 15, 2018 at 06:30:26AM +0200, Mike Galbraith wrote:
> I have not queued it, but given Peter's Signed-off-by and your Tested-by
> I would be happy to do so.

And a Changelog of course :-)

---
From: Peter Zijlstra <peterz@...radead.org>
Subject: rcu/x86: Provide early rcu_cpu_starting() callback

The x86/mtrr code does horrific things because hardware. It uses
stop_machine_from_inactive_cpu(), which does a wakeup (of the stopper
thread on another CPU), which uses RCU, all before the CPU is onlined.

RCU complains about this because wakeups use RCU, and RCU (rightfully)
does not consider offline CPUs for grace periods.

Fix this by calling rcu_cpu_starting() early, from mtrr_ap_init(), so that
RCU already considers the CPU online when the wakeup happens.

Tested-by: Mike Galbraith <efault@....de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---

> > > diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
> > > index 7468de429087..07360523c3ce 100644
> > > --- a/arch/x86/kernel/cpu/mtrr/main.c
> > > +++ b/arch/x86/kernel/cpu/mtrr/main.c
> > > @@ -793,6 +793,9 @@ void mtrr_ap_init(void)
> > >  
> > >  	if (!use_intel() || mtrr_aps_delayed_init)
> > >  		return;
> > > +
> > > +	rcu_cpu_starting(smp_processor_id());
> > > +
> > >  	/*
> > >  	 * Ideally we should hold mtrr_mutex here to avoid mtrr entries
> > >  	 * changed, but this routine will be called in cpu boot time,
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index 2a734692a581..4dab46950fdb 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -3775,6 +3775,8 @@ int rcutree_dead_cpu(unsigned int cpu)
> > >  	return 0;
> > >  }
> > >  
> > > +static DEFINE_PER_CPU(int, rcu_cpu_started);
> > > +
> > >  /*
> > >   * Mark the specified CPU as being online so that subsequent grace periods
> > >   * (both expedited and normal) will wait on it.  Note that this means that
> > > @@ -3796,6 +3798,11 @@ void rcu_cpu_starting(unsigned int cpu)
> > >  	struct rcu_node *rnp;
> > >  	struct rcu_state *rsp;
> > >  
> > > +	if (per_cpu(rcu_cpu_started, cpu))
> > > +		return;
> > > +
> > > +	per_cpu(rcu_cpu_started, cpu) = 1;
> > > +
> > >  	for_each_rcu_flavor(rsp) {
> > >  		rdp = per_cpu_ptr(rsp->rda, cpu);
> > >  		rnp = rdp->mynode;
> > > @@ -3852,6 +3859,8 @@ void rcu_report_dead(unsigned int cpu)
> > >  	preempt_enable();
> > >  	for_each_rcu_flavor(rsp)
> > >  		rcu_cleanup_dying_idle_cpu(cpu, rsp);
> > > +
> > > +	per_cpu(rcu_cpu_started, cpu) = 0;
> > >  }
> > >  
> > >  /* Migrate the dead CPU's callbacks to the current CPU. */
> > 
> 
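The key change in the tree.c hunk above is the per-CPU rcu_cpu_started flag,
which makes rcu_cpu_starting() idempotent: the early call from mtrr_ap_init()
and the later call from the regular hotplug path perform the online
bookkeeping only once, and rcu_report_dead() clears the flag so the next
onlining starts fresh. Below is a minimal user-space sketch of that guard
pattern; the fake_* names and NR_CPUS value are invented for illustration,
this is not the kernel code.

/*
 * Stand-alone illustration (not kernel code) of the once-per-onlining
 * guard: a per-CPU flag turns repeated calls into no-ops until the CPU
 * goes offline again.  All fake_* names are made up for this sketch.
 */
#include <stdio.h>

#define NR_CPUS 4

static int rcu_cpu_started[NR_CPUS];	/* stands in for DEFINE_PER_CPU(int, rcu_cpu_started) */

static void fake_mark_cpu_online(unsigned int cpu)
{
	printf("CPU %u: now counted for grace periods\n", cpu);
}

static void fake_rcu_cpu_starting(unsigned int cpu)
{
	if (rcu_cpu_started[cpu])	/* already onlined: nothing to do */
		return;
	rcu_cpu_started[cpu] = 1;

	fake_mark_cpu_online(cpu);
}

static void fake_rcu_report_dead(unsigned int cpu)
{
	rcu_cpu_started[cpu] = 0;	/* allow the next onlining to run again */
}

int main(void)
{
	fake_rcu_cpu_starting(1);	/* early call, as from mtrr_ap_init() */
	fake_rcu_cpu_starting(1);	/* later hotplug call: skipped */
	fake_rcu_report_dead(1);	/* offline resets the guard */
	fake_rcu_cpu_starting(1);	/* next onlining initializes again */
	return 0;
}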
