Message-Id: <20180626192746.GJ3593@linux.vnet.ibm.com>
Date:   Tue, 26 Jun 2018 12:27:47 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Boqun Feng <boqun.feng@...il.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
        oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0
 being offline

On Tue, Jun 26, 2018 at 07:46:52PM +0800, Boqun Feng wrote:
> On Tue, Jun 26, 2018 at 06:44:47PM +0800, Boqun Feng wrote:
> > On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> > > On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > > > +		preempt_disable();
> > > > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > > > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > > > +				continue;
> > > 
> > > Create for_each_node_online_cpu() instead? Seems a bit pointless to
> > > iterate possible mask only to then check it against the online mask.
> > > Just iterate the online mask directly.
> > > 
> > > Or better yet, write this as:
> > > 
> > > 	preempt_disable();
> > > 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > 	if (cpu > rnp->grphi)
> > > 		cpu = WORK_CPU_UNBOUND;
> > > 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > 	preempt_enable();
> > > 
> > > Which is what it appears to be doing.
> > > 
> > 
> > Makes sense! Thanks ;-)
> > 
> > Applied this and am running a TREE03 rcutorture test. If all goes
> > well, I will send the updated patch.
> > 
> 
> So the patch has passed one 30-minute TREE03 rcutorture run. Paul, if
> it looks good, could you take it for your next spin or a future pull
> request? Thanks.

I ended up with the following, mostly just rewording the comment and
adding a one-liner on the change.  Does this work for you?

							Thanx, Paul

------------------------------------------------------------------------

commit ef31fa78032536d594630d7bd315d3faf60d98ca
Author: Boqun Feng <boqun.feng@...il.com>
Date:   Fri Jun 15 12:06:31 2018 -0700

    rcu: Make expedited GPs handle CPU 0 being offline
    
    Currently, the parallelized initialization of expedited grace periods
    queues each rcu_node structure's work on the CPU identified by that
    structure's ->grplo field.  This works fine unless that CPU is offline.
    This commit therefore queues the work on the lowest-numbered online CPU
    covered by the rcu_node structure, or on WORK_CPU_UNBOUND if none of
    that structure's CPUs are online.
    
    Note that this patch checks cpu_online_mask directly (via
    cpumask_next()) instead of using the usual approach of checking bits
    in the rcu_node structure's ->qsmaskinitnext field.  This is safe
    because preemption is disabled across both the online check and the
    call to queue_work_on(), and CPU-hotplug offlining cannot complete
    while any CPU is running with preemption disabled.
    
    Signed-off-by: Boqun Feng <boqun.feng@...il.com>
    [ paulmck: Disable preemption to close offline race window. ]
    Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
    [ paulmck: Apply Peter Zijlstra feedback on CPU selection. ]

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index c6385ee1af65..b3df3b770afb 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
 static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 				     smp_call_func_t func)
 {
+	int cpu;
 	struct rcu_node *rnp;
 
 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
@@ -493,7 +494,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 			continue;
 		}
 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
+		preempt_disable();
+		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
+		/* If all offline, queue the work on an unbound CPU. */
+		if (unlikely(cpu > rnp->grphi))
+			cpu = WORK_CPU_UNBOUND;
+		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
+		preempt_enable();
 		rnp->exp_need_flush = true;
 	}
 

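For readers tracing the CPU-selection arithmetic above, here is a minimal
user-space sketch of the same logic.  It is an analogue, not kernel code:
a uint64_t stands in for cpu_online_mask, -1 for WORK_CPU_UNBOUND, and
mask_next()/pick_queue_cpu() are hypothetical helpers that mirror
cpumask_next() and the new queueing logic in the patch.

	#include <stdint.h>
	#include <stdio.h>

	#define WORK_CPU_UNBOUND (-1)	/* stand-in for the kernel constant */
	#define NR_CPUS 64		/* toy system size */

	/*
	 * Lowest set bit in @mask that is >= @start, or NR_CPUS if none.
	 * Mirrors cpumask_next(start - 1, mask) on the toy bitmask.
	 */
	static int mask_next(int start, uint64_t mask)
	{
		for (int cpu = start; cpu < NR_CPUS; cpu++)
			if (mask & (1ULL << cpu))
				return cpu;
		return NR_CPUS;
	}

	/*
	 * Pick the CPU to queue an rcu_node's initialization work on:
	 * the lowest-numbered online CPU in [grplo, grphi], or
	 * WORK_CPU_UNBOUND if every CPU in that range is offline.
	 */
	static int pick_queue_cpu(int grplo, int grphi, uint64_t online)
	{
		int cpu = mask_next(grplo, online);

		if (cpu > grphi)
			return WORK_CPU_UNBOUND;
		return cpu;
	}

	int main(void)
	{
		uint64_t online = 0xf0;	/* CPUs 4-7 online, 0-3 offline */

		/* Node covering CPUs 0-7: skips offline CPU 0, picks CPU 4. */
		printf("%d\n", pick_queue_cpu(0, 7, online));
		/* Node covering CPUs 8-15: all offline, falls back to unbound. */
		printf("%d\n", pick_queue_cpu(8, 15, online));
		return 0;
	}

The kernel version additionally brackets the check and queue_work_on()
with preempt_disable()/preempt_enable() so the chosen CPU cannot finish
going offline in between; that part has no user-space analogue here.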