Message-ID: <20211117192710.GK641268@paulmck-ThinkPad-P17-Gen-1>
Date:   Wed, 17 Nov 2021 11:27:10 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Frederic Weisbecker <frederic@...nel.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Uladzislau Rezki <urezki@...il.com>,
        Neeraj Upadhyay <quic_neeraju@...cinc.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Josh Triplett <josh@...htriplett.org>,
        Joel Fernandes <joel@...lfernandes.org>, rcu@...r.kernel.org
Subject: Re: [PATCH 4/6] rcu/nocb: Create nocb kthreads on all CPUs as long
 as the "rcu_nocb=" is passed

On Wed, Nov 17, 2021 at 04:56:35PM +0100, Frederic Weisbecker wrote:
> In order to be able to (de-)offload any CPU using cpuset in the future,
> create a NOCB kthread for all possible CPUs. For now this is done only
> as long as the "rcu_nocb=" kernel parameter is passed to avoid
> the unnecessary overhead for most users.

The "nohz_full=" kernel parameter would also cause these kthreads to
be created, correct?  (Yeah, a nit, but...)
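
(Purely illustrative, not from this patch: assuming the usual
"rcu_nocbs="/"nohz_full=" parameter spellings, either of these boot
command lines should end up requesting the rcuo kthreads:

	rcu_nocbs=1-7	# offload CPUs 1-7 explicitly
	nohz_full=1-7	# or implicitly, since nohz_full CPUs are also offloaded

)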

And some fallout of my forgetfulness below.  :-/

							Thanx, Paul

> Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
> Cc: Neeraj Upadhyay <quic_neeraju@...cinc.com>
> Cc: Boqun Feng <boqun.feng@...il.com>
> Cc: Uladzislau Rezki <urezki@...il.com>
> Cc: Josh Triplett <josh@...htriplett.org>
> Cc: Joel Fernandes <joel@...lfernandes.org>
> ---
>  kernel/rcu/tree_nocb.h | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index 9fe4be10fde7..1871f15b8472 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1221,11 +1221,8 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
>  	struct rcu_data *rdp_gp;
>  	struct task_struct *t;
>  
> -	/*
> -	 * If this isn't a no-CBs CPU or if it already has an rcuo kthread,
> -	 * then nothing to do.
> -	 */
> -	if (!rcu_is_nocb_cpu(cpu) || rdp->nocb_cb_kthread)
> +	/* If it already has an rcuo kthread, then nothing to do. */
> +	if (rdp->nocb_cb_kthread)
>  		return;
>  
>  	/* If we didn't spawn the GP kthread first, reorganize! */
> @@ -1253,7 +1250,7 @@ static void rcu_spawn_one_nocb_kthread(int cpu)
>   */
>  static void rcu_spawn_cpu_nocb_kthread(int cpu)
>  {
> -	if (rcu_scheduler_fully_active)
> +	if (rcu_scheduler_fully_active && rcu_nocb_is_setup)
>  		rcu_spawn_one_nocb_kthread(cpu);
>  }
>  
> @@ -1268,7 +1265,7 @@ static void __init rcu_spawn_nocb_kthreads(void)
>  	int cpu;
>  
>  	if (rcu_nocb_is_setup) {
> -		for_each_online_cpu(cpu)
> +		for_each_possible_cpu(cpu)

Gah...  I had forgotten.  :-/

Some firmware lies about the OS instance's age.  Other firmware lies
about the number of CPUs, sometimes claiming large multiples of the
actual number of CPUs.

So this needs to stay "for_each_online_cpu".  Otherwise, Paul Gortmaker
will once again be afflicted with hundreds of unnecessary rcuo kthreads.

The later calls to rcu_spawn_cpu_nocb_kthread() from rcutree_prepare_cpu()
will take care of any CPUs that first come online later.
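
Roughly (a from-memory sketch of kernel/rcu/tree.c, not verbatim -- the
point is just that the hotplug "prepare" step already runs for each CPU
as it comes online):

	int rcutree_prepare_cpu(unsigned int cpu)
	{
		/* ... per-CPU rcu_data and rcu_node setup elided ... */

		/* Spawn this CPU's rcuo kthreads, if they are needed. */
		rcu_spawn_cpu_nocb_kthread(cpu);
		return 0;
	}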

Apologies for my earlier forgetfulness!!!

>  			rcu_spawn_cpu_nocb_kthread(cpu);
>  	}
>  }
> @@ -1303,7 +1300,7 @@ static void __init rcu_organize_nocb_kthreads(void)
>  	 * Should the corresponding CPU come online in the future, then
>  	 * we will spawn the needed set of rcu_nocb_kthread() kthreads.
>  	 */
> -	for_each_cpu(cpu, rcu_nocb_mask) {
> +	for_each_possible_cpu(cpu) {

This needs to change, but to for_each_online_cpu() instead of
for_each_possible_cpu().  That handles the case where the boot CPU is
not initially offloaded, but where the sysadm later needs to offload it.
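
Something like the following shape, in other words (just a sketch of the
suggestion, folding in the cpumask_test_cpu() check from the hunk below):

	for_each_online_cpu(cpu) {
		rdp = per_cpu_ptr(&rcu_data, cpu);
		/* ... GP-kthread grouping as in the existing loop body ... */
		rdp->nocb_gp_rdp = rdp_gp;
		if (cpumask_test_cpu(cpu, rcu_nocb_mask))
			list_add_tail(&rdp->nocb_entry_rdp,
				      &rdp_gp->nocb_head_rdp);
	}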

>  		rdp = per_cpu_ptr(&rcu_data, cpu);
>  		if (rdp->cpu >= nl) {
>  			/* New GP kthread, set up for CBs & next GP. */
> @@ -1327,7 +1324,8 @@ static void __init rcu_organize_nocb_kthreads(void)
>  				pr_cont(" %d", cpu);
>  		}
>  		rdp->nocb_gp_rdp = rdp_gp;
> -		list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
> +		if (cpumask_test_cpu(cpu, rcu_nocb_mask))
> +			list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
>  	}
>  	if (gotnocbs && dump_tree)
>  		pr_cont("%s\n", gotnocbscbs ? "" : " (self only)");
> -- 
> 2.25.1
> 
