Date:   Mon, 25 Jun 2018 15:43:32 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     linux-kernel@...r.kernel.org
Cc:     mingo@...nel.org, jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
        rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
        fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
        Boqun Feng <boqun.feng@...il.com>
Subject: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline

From: Boqun Feng <boqun.feng@...il.com>

Currently, the parallelized initialization of expedited grace periods
queues each rcu_node structure's work on the CPU identified by that
structure's ->grplo field.  This works fine unless that CPU is offline.
This commit therefore queues the work on the lowest-numbered online CPU
covered by the rcu_node structure, falling back to an unbound workqueue
(which simply reports the quiescent states) when none of that
structure's CPUs are online.

Note that this patch uses cpu_is_offline() instead of the usual
approach of checking bits in the rcu_node structure's ->qsmaskinitnext
field.  This is safe because preemption is disabled across both the
cpu_is_offline() check and the call to queue_work_on().

Not-Yet-Signed-off-by: Boqun Feng <boqun.feng@...il.com>
[ paulmck: Disable preemption to close offline race window. ]
Not-Yet-Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
---
 kernel/rcu/tree_exp.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index c6385ee1af65..6acac74092cb 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
 static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 				     smp_call_func_t func)
 {
+	int cpu;
 	struct rcu_node *rnp;
 
 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
@@ -493,8 +494,19 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 			continue;
 		}
 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
-		rnp->exp_need_flush = true;
+		preempt_disable();
+		for_each_leaf_node_possible_cpu(rnp, cpu) {
+			if (cpu_is_offline(cpu)) /* Preemption disabled. */
+				continue;
+			queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
+			rnp->exp_need_flush = true;
+			break;
+		}
+		preempt_enable();
+		if (!rnp->exp_need_flush) { /* All offline, report QSes. */
+			queue_work(rcu_par_gp_wq, &rnp->rew.rew_work);
+			rnp->exp_need_flush = true;
+		}
 	}
 
 	/* Wait for workqueue jobs (if any) to complete. */
-- 
2.17.1
