Message-ID: <5bc2d72a-ae27-43f0-893e-afb202abd61b@paulmck-laptop>
Date: Mon, 3 Jun 2024 17:10:54 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
	Valentin Schneider <vschneid@...hat.com>,
	Boqun Feng <boqun.feng@...il.com>,
	Joel Fernandes <joel@...lfernandes.org>,
	Neeraj Upadhyay <neeraj.upadhyay@....com>,
	Uladzislau Rezki <urezki@...il.com>,
	Zqiang <qiang.zhang1211@...il.com>, rcu <rcu@...r.kernel.org>
Subject: Re: [PATCH 5/6] rcu: Remove full memory barrier on RCU stall printout

On Wed, May 15, 2024 at 02:53:31PM +0200, Frederic Weisbecker wrote:
> The RCU stall printout fetches the EQS state of a CPU with a preceding
> full memory barrier. However, there is nothing to order this read against
> at this debugging stage. It is inherently racy when performed remotely.
> 
> Do a plain read instead.
> 
> This was the last user of rcu_dynticks_snap().
> 
> Signed-off-by: Frederic Weisbecker <frederic@...nel.org>

I went through all of these, and they look good.  Though I am a bit
nervous about this one.  The RCU CPU stall warning code used to be
completely unordered, but the hardware taught me better.  I did not
add these in response to a problem (just lazily used the existing fully
ordered primitive), but you never know.  Me, I would have kept the extra
memory barriers in all six patches because they are not on a fastpath,
but you are quite correct that they are redundant.
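
For anyone following along, here is a minimal userspace sketch of the
difference being discussed, with C11 atomics standing in for the kernel
primitives (the names below are illustrative only, not the actual kernel
APIs):

#include <stdatomic.h>

static _Atomic int dynticks;	/* stand-in for the per-CPU ->dynticks counter */

/* Old style, as in rcu_dynticks_snap(): full barrier plus acquire load,
 * so the snapshot is ordered against surrounding accesses on the
 * reading CPU. */
static int snap_ordered(void)
{
	atomic_thread_fence(memory_order_seq_cst);	/* roughly smp_mb() */
	return atomic_load_explicit(&dynticks, memory_order_acquire);
}

/* New style for the stall printout: a plain (relaxed) load.  The value
 * is only printed for diagnostics, so there is nothing to order it
 * against, and the remote race is tolerated. */
static int snap_plain(void)
{
	return atomic_load_explicit(&dynticks, memory_order_relaxed);
}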

So I have queued these, and intend to send them into the next merge
window.  However, you now own vanilla RCU grace-period memory ordering,
both normal and expedited.  As in if someone else breaks it, you already
bought it.  ;-)

							Thanx, Paul

> ---
>  kernel/rcu/tree.c       | 10 ----------
>  kernel/rcu/tree_stall.h |  4 ++--
>  2 files changed, 2 insertions(+), 12 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 02f6f3483482..04dde7473613 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -299,16 +299,6 @@ static void rcu_dynticks_eqs_online(void)
>  	ct_state_inc(RCU_DYNTICKS_IDX);
>  }
>  
> -/*
> - * Snapshot the ->dynticks counter with full ordering so as to allow
> - * stable comparison of this counter with past and future snapshots.
> - */
> -static int rcu_dynticks_snap(int cpu)
> -{
> -	smp_mb();  // Fundamental RCU ordering guarantee.
> -	return ct_dynticks_cpu_acquire(cpu);
> -}
> -
>  /*
>   * Return true if the snapshot returned from rcu_dynticks_snap()
>   * indicates that RCU is in an extended quiescent state.
> diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
> index 460efecd077b..4b0e9d7c4c68 100644
> --- a/kernel/rcu/tree_stall.h
> +++ b/kernel/rcu/tree_stall.h
> @@ -501,7 +501,7 @@ static void print_cpu_stall_info(int cpu)
>  	}
>  	delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
>  	falsepositive = rcu_is_gp_kthread_starving(NULL) &&
> -			rcu_dynticks_in_eqs(rcu_dynticks_snap(cpu));
> +			rcu_dynticks_in_eqs(ct_dynticks_cpu(cpu));
>  	rcuc_starved = rcu_is_rcuc_kthread_starving(rdp, &j);
>  	if (rcuc_starved)
>  		// Print signed value, as negative values indicate a probable bug.
> @@ -515,7 +515,7 @@ static void print_cpu_stall_info(int cpu)
>  			rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' :
>  				"!."[!delta],
>  	       ticks_value, ticks_title,
> -	       rcu_dynticks_snap(cpu) & 0xffff,
> +	       ct_dynticks_cpu(cpu) & 0xffff,
>  	       ct_dynticks_nesting_cpu(cpu), ct_dynticks_nmi_nesting_cpu(cpu),
>  	       rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
>  	       data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
> -- 
> 2.44.0
> 
