Message-ID: <20200810180647.GN4295@paulmck-ThinkPad-P72>
Date:   Mon, 10 Aug 2020 11:06:47 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     "Joel Fernandes (Google)" <joel@...lfernandes.org>
Cc:     linux-kernel@...r.kernel.org, Davidlohr Bueso <dave@...olabs.net>,
        Jonathan Corbet <corbet@....net>,
        Josh Triplett <josh@...htriplett.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        linux-doc@...r.kernel.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
        neeraju@...eaurora.org, peterz@...radead.org,
        Randy Dunlap <rdunlap@...radead.org>, rcu@...r.kernel.org,
        Steven Rostedt <rostedt@...dmis.org>, tglx@...utronix.de,
        vineethrp@...il.com
Subject: Re: [PATCH v4 2/5] rcu/tree: Clarify comments about FQS loop
 reporting quiescent states

On Fri, Aug 07, 2020 at 01:07:19PM -0400, Joel Fernandes (Google) wrote:
> At least since v4.19, the FQS loop no longer reports quiescent states
> for offline CPUs unless it is an emergency.
> 
> This commit therefore fixes the comment in rcu_gp_init() to match the
> current code.
> 
> Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> ---
>  kernel/rcu/tree.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index a49fa3b60faa..2fb66cdbfa25 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1701,9 +1701,11 @@ static bool rcu_gp_init(void)
>  
>  	/*
>  	 * Apply per-leaf buffered online and offline operations to the
> -	 * rcu_node tree.  Note that this new grace period need not wait
> -	 * for subsequent online CPUs, and that quiescent-state forcing
> -	 * will handle subsequent offline CPUs.
> +	 * rcu_node tree. Note that this new grace period need not wait for
> +	 * subsequent online CPUs, and that RCU hooks in the CPU offlining
> +	 * path, when combined with checks in this function, will handle CPUs
> +	 * that are currently going offline or that go offline later. Refer to
> +	 * RCU's Requirements documentation about hotplug requirements as well.
>  	 */
>  	rcu_state.gp_state = RCU_GP_ONOFF;
>  	rcu_for_each_leaf_node(rnp) {

Very good!  I pulled this in with light edits as shown below.
Please let me know if I messed something up.

							Thanx, Paul

------------------------------------------------------------------------

commit a6117399840b963f90cc5322ef9ea7c52de639b2
Author: Joel Fernandes (Google) <joel@...lfernandes.org>
Date:   Fri Aug 7 13:07:19 2020 -0400

    rcu/tree: Clarify comments about FQS loop reporting quiescent states
    
    Since at least v4.19, the FQS loop no longer reports quiescent states
    for offline CPUs except in emergency situations.
    
    This commit therefore fixes the comment in rcu_gp_init() to match the
    current code.
    
    Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
    Signed-off-by: Paul E. McKenney <paulmck@...nel.org>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index bcc6160..59e1943 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1730,10 +1730,13 @@ static bool rcu_gp_init(void)
 	raw_spin_unlock_irq_rcu_node(rnp);
 
 	/*
-	 * Apply per-leaf buffered online and offline operations to the
-	 * rcu_node tree.  Note that this new grace period need not wait
-	 * for subsequent online CPUs, and that quiescent-state forcing
-	 * will handle subsequent offline CPUs.
+	 * Apply per-leaf buffered online and offline operations to
+	 * the rcu_node tree. Note that this new grace period need not
+	 * wait for subsequent online CPUs, and that RCU hooks in the CPU
+	 * offlining path, when combined with checks in this function,
+	 * will handle CPUs that are currently going offline or that will
+	 * go offline later.  Please also refer to "Hotplug CPU" section
+	 * of RCU's Requirements documentation.
 	 */
 	rcu_state.gp_state = RCU_GP_ONOFF;
 	rcu_for_each_leaf_node(rnp) {

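For illustration only, here is a heavily simplified sketch (not the actual kernel code) of
the mechanism the updated comment describes: CPU-hotplug events buffer their changes in a
per-leaf mask, grace-period initialization folds those buffered changes in, and the CPU
offlining path (rather than the FQS loop) takes care of CPUs that go offline afterwards.
The struct and function names below are invented for the example; only the general idea of
a qsmaskinit versus qsmaskinitnext pair mirrors the real rcu_node fields.

/*
 * Toy model (NOT kernel code): a leaf "rcu_node" tracks which CPUs the
 * current grace period must wait on.  CPU-hotplug events update
 * qsmaskinitnext asynchronously; grace-period initialization folds the
 * buffered updates into qsmaskinit, so the new grace period waits only
 * on CPUs that were online at initialization time.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_rcu_node {
	uint64_t qsmaskinit;	 /* CPUs this grace period waits on. */
	uint64_t qsmaskinitnext; /* Buffered online/offline updates. */
};

/* Called from the (toy) CPU-online path. */
static void toy_cpu_online(struct toy_rcu_node *rnp, int cpu)
{
	rnp->qsmaskinitnext |= 1ULL << cpu;
}

/* Called from the (toy) CPU-offline path. */
static void toy_cpu_offline(struct toy_rcu_node *rnp, int cpu)
{
	rnp->qsmaskinitnext &= ~(1ULL << cpu);
	/*
	 * In real RCU, the offlining path also reports a quiescent state
	 * on behalf of the outgoing CPU, so an in-progress grace period
	 * is not blocked and the FQS loop need not report it.
	 */
}

/* Grace-period initialization: apply buffered hotplug operations. */
static void toy_gp_init(struct toy_rcu_node *rnp)
{
	rnp->qsmaskinit = rnp->qsmaskinitnext;
}

int main(void)
{
	struct toy_rcu_node rnp = { 0, 0 };

	toy_cpu_online(&rnp, 0);
	toy_cpu_online(&rnp, 1);
	toy_gp_init(&rnp);	  /* New GP waits on CPUs 0 and 1. */
	toy_cpu_offline(&rnp, 1); /* Handled by the offlining path, not FQS. */

	printf("GP waits on mask 0x%llx; next GP will use 0x%llx\n",
	       (unsigned long long)rnp.qsmaskinit,
	       (unsigned long long)rnp.qsmaskinitnext);
	return 0;
}

Built with any C compiler, the sketch reports that the current grace period still waits on
both CPUs while the next one will wait only on CPU 0, which is the behavior the revised
comment documents.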