Message-ID: <20220611164918.GN1790663@paulmck-ThinkPad-P17-Gen-1>
Date:   Sat, 11 Jun 2022 09:49:18 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Zqiang <qiang1.zhang@...el.com>
Cc:     frederic@...nel.org, rcu@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] rcu/nocb: Avoid polling when myrdp->nocb_head_rdp
 list is empty

On Sat, Jun 11, 2022 at 07:00:44PM +0800, Zqiang wrote:
> Currently, if the 'rcu_nocb_poll' boot parameter is enabled, all rcuog
> kthreads enter polling mode.  However, only the rdp of CPUs belonging
> to rcu_nocb_mask are inserted into the 'nocb_head_rdp' list, and all
> of the rdp served by a given rcuog kthread might have been
> de-offloaded.  Either way, the 'nocb_head_rdp' list served by that
> rcuog kthread can be empty, and while it is empty the polling rcuog
> kthread does nothing useful.  Fix this by exiting polling mode when
> the 'nocb_head_rdp' list is empty, and entering polling mode
> otherwise.
> 
> Co-developed-by: Frederic Weisbecker <frederic@...nel.org>
> Signed-off-by: Zqiang <qiang1.zhang@...el.com>
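
For reference, here is a rough userspace analogy of the behavior change
described in the commit message (illustrative only, not kernel code; all
names below are made up): a poller thread that, like a rcuog kthread
with rcu_nocb_poll set, would otherwise wake up every tick even with an
empty list, and instead blocks on a waitqueue until an "rdp" is
offloaded.

/*
 * Userspace analogy of the polling-vs-waiting change; every name here
 * is illustrative and none of this is kernel code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gp_wq = PTHREAD_COND_INITIALIZER;
static int nr_rdp;		/* stand-in for the nocb_head_rdp list */
static bool gp_sleep;		/* stand-in for nocb_gp_sleep */

static void *poller(void *arg)
{
	(void)arg;
	for (int iter = 0; iter < 5; iter++) {
		pthread_mutex_lock(&lock);
		if (nr_rdp == 0) {
			/* Patched behavior: block until something is offloaded. */
			gp_sleep = true;
			while (gp_sleep)
				pthread_cond_wait(&gp_wq, &lock);
		}
		pthread_mutex_unlock(&lock);
		/* Old behavior here would be: usleep(1000);  (poll every tick) */
		printf("iteration %d: %d rdp(s) to scan\n", iter, nr_rdp);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, poller, NULL);
	sleep(1);

	/* "Offload" an rdp: add it to the list and wake the poller. */
	pthread_mutex_lock(&lock);
	nr_rdp = 1;
	gp_sleep = false;
	pthread_cond_signal(&gp_wq);
	pthread_mutex_unlock(&lock);

	pthread_join(tid, NULL);
	return 0;
}

Built with "cc -pthread", the poller blocks for the first second and
then runs its remaining iterations without any further sleeping once
the rdp has been added.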

Much better, thank you!  One additional question below.

(And we of course need Frederic's "D'accord" before I send this sort of
thing to mainline.)

							Thanx, Paul

> ---
>  v1->v2:
>  Move the rcu_nocb_poll flag check from rdp_offload_toggle() to
>  rcu_nocb_rdp_offload/deoffload() in order to avoid unnecessarily
>  setting the rdp_gp->nocb_gp_sleep flag, which is not used when
>  rcu_nocb_poll is set.
>  
>  v2->v3:
>  When the nocb_head_rdp list is empty, put the rcuog kthread on the
>  nocb_gp_wq waitqueue to wait for offloading.
> 
>  kernel/rcu/tree_nocb.h | 24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index fa8e4f82e60c..a8f574d8850d 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -584,6 +584,14 @@ static int nocb_gp_toggle_rdp(struct rcu_data *rdp,
>  	return ret;
>  }
>  
> +static void nocb_gp_sleep(struct rcu_data *my_rdp, int cpu)
> +{
> +	trace_rcu_nocb_wake(rcu_state.name, cpu, TPS("Sleep"));
> +	swait_event_interruptible_exclusive(my_rdp->nocb_gp_wq,
> +					!READ_ONCE(my_rdp->nocb_gp_sleep));
> +	trace_rcu_nocb_wake(rcu_state.name, cpu, TPS("EndSleep"));
> +}
> +
>  /*
>   * No-CBs GP kthreads come here to wait for additional callbacks to show up
>   * or for grace periods to end.
> @@ -701,13 +709,19 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
>  		/* Polling, so trace if first poll in the series. */
>  		if (gotcbs)
>  			trace_rcu_nocb_wake(rcu_state.name, cpu, TPS("Poll"));
> -		schedule_timeout_idle(1);
> +		if (list_empty(&my_rdp->nocb_head_rdp)) {
> +			raw_spin_lock_irqsave(&my_rdp->nocb_gp_lock, flags);
> +			if (!my_rdp->nocb_toggling_rdp)

If this "if" condition is false, what prevents this kthread from being
in a CPU-bound loop?
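
To sketch that concern as a standalone userspace model (all names made
up): if nocb_gp_sleep is left false here, the wait predicate is already
satisfied, the wait returns immediately, and the outer loop re-enters
the wait with no delay at all.

/* Userspace model of the concern above; all names are illustrative. */
#include <stdbool.h>
#include <stdio.h>

static bool gp_sleep;	/* left false when a toggling rdp is pending */

/* Stand-in for swait_event_interruptible_exclusive(wq, !gp_sleep). */
static void wait_event_like(void)
{
	if (!gp_sleep)
		return;	/* predicate already true: returns without sleeping */
	/* ...a real implementation would block here until gp_sleep clears... */
}

int main(void)
{
	int spins = 0;

	/* Model of re-entering nocb_gp_wait() while the toggle is in flight. */
	for (int i = 0; i < 1000; i++) {
		/* The "if (!my_rdp->nocb_toggling_rdp)" test fails, so  */
		/* gp_sleep is never set to true before the wait below.  */
		wait_event_like();	/* returns immediately */
		spins++;		/* loop again with no delay: CPU-bound */
	}
	printf("spun %d times without ever sleeping\n", spins);
	return 0;
}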

> +				WRITE_ONCE(my_rdp->nocb_gp_sleep, true);
> +			raw_spin_unlock_irqrestore(&my_rdp->nocb_gp_lock, flags);
> +			/* Wait for any offloading rdp */
> +			nocb_gp_sleep(my_rdp, cpu);
> +		} else {
> +			schedule_timeout_idle(1);
> +		}
>  	} else if (!needwait_gp) {
>  		/* Wait for callbacks to appear. */
> -		trace_rcu_nocb_wake(rcu_state.name, cpu, TPS("Sleep"));
> -		swait_event_interruptible_exclusive(my_rdp->nocb_gp_wq,
> -				!READ_ONCE(my_rdp->nocb_gp_sleep));
> -		trace_rcu_nocb_wake(rcu_state.name, cpu, TPS("EndSleep"));
> +		nocb_gp_sleep(my_rdp, cpu);
>  	} else {
>  		rnp = my_rdp->mynode;
>  		trace_rcu_this_gp(rnp, my_rdp, wait_gp_seq, TPS("StartWait"));
> -- 
> 2.25.1
> 
