Date: Mon, 20 May 2024 16:25:33 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Boqun Feng <boqun.feng@...il.com>,
	Joel Fernandes <joel@...lfernandes.org>,
	Neeraj Upadhyay <neeraj.upadhyay@....com>,
	Uladzislau Rezki <urezki@...il.com>,
	Zqiang <qiang.zhang1211@...il.com>, rcu <rcu@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 2/2] rcu/tasks: Further comment ordering around current
 task snapshot on TASK-TRACE

On Mon, May 20, 2024 at 10:41:52PM +0200, Frederic Weisbecker wrote:
> On Mon, May 20, 2024 at 11:48:54AM -0700, Paul E. McKenney wrote:
> > On Fri, May 17, 2024 at 05:23:03PM +0200, Frederic Weisbecker wrote:
> > > Comment the current understanding of barriers and locking role around
> > > task snapshot.
> > > 
> > > Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
> > > ---
> > >  kernel/rcu/tasks.h | 18 +++++++++++++++---
> > >  1 file changed, 15 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> > > index 6a9ee35a282e..05413b37dd6e 100644
> > > --- a/kernel/rcu/tasks.h
> > > +++ b/kernel/rcu/tasks.h
> > > @@ -1738,9 +1738,21 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
> > >  	for_each_online_cpu(cpu) {
> > >  		rcu_read_lock();
> > >  		/*
> > > -		 * RQ must be locked because no ordering exists/can be relied upon
> > > -		 * between rq->curr write and subsequent read sides. This ensures that
> > > -		 * further context switching tasks will see update side pre-GP accesses.
> > > +		 * RQ lock + smp_mb__after_spinlock() before reading rq->curr serve
> > > +		 * three purposes:
> > > +		 *
> > > +		 * 1) Ordering against previous tasks' accesses (though already enforced
> > > +		 *    by upcoming IPIs and post-GP synchronize_rcu()).
> > > +		 *
> > > +		 * 2) Make sure not to miss the latest context switch, because no ordering
> > > +		 *    exists/can be relied upon between rq->curr write and subsequent read
> > > +		 *    sides.
> > > +		 *
> > > +		 * 3) Make sure subsequent context switching tasks will see update side
> > > +		 *    pre-GP accesses.
> > > +		 *
> > > +		 * smp_mb() after reading rq->curr doesn't play a significant role and might
> > > +		 * be considered for removal in the future.
> > >  		 */
> > >  		t = cpu_curr_snapshot(cpu);
> > >  		if (rcu_tasks_trace_pertask_prep(t, true))
> > 
> > How about this for that comment?
> > 
> > 		// Note that cpu_curr_snapshot() picks up the target
> > 		// CPU's current task while its runqueue is locked with an
> > 		// smp_mb__after_spinlock().  This ensures that subsequent
> > 		// tasks running on that CPU will see the updater's pre-GP
> > 		// accesses.
> 
> Right, but to achieve that, the smp_mb() was already enough, courtesy of
> the official full barrier in schedule() that (this one at least) we can rely on:
> 
> Updater             Reader
> -------             ------
> X = 1              rq->curr = A
>                    // another context switch later
> smp_mb()           smp_mb__after_spinlock() // right after rq_lock in __schedule()
> READ rq->curr      rq->curr = B
>                    READ X
> 
> If the updater's read of rq->curr misses A, then B will see the update on X.
> 
> So I think we still need to justify the rq locking in the comments.
> 
> >                          The trailing smp_mb() in cpu_curr_snapshot()
> > 		// does not currently play a role other than to simplify
> > 		// that function's ordering semantics.  If these simplified
> > 		// ordering semantics continue to be redundant, that smp_mb()
> > 		// might be removed.
> 
> That looks good.
> 
> > 
> > I left out the "ordering against previous tasks' accesses" because,
> > as you say, this ordering is provided elsewhere.
> 
> Right!

Good points!  How about the following?

		// Note that cpu_curr_snapshot() picks up the target
		// CPU's current task while its runqueue is locked with
		// an smp_mb__after_spinlock().  This ensures that either
		// the grace-period kthread will see that task's read-side
		// critical section or the task will see the updater's pre-GP
		// accesses.  The trailing smp_mb() in cpu_curr_snapshot()
		// does not currently play a role other than to simplify
		// that function's ordering semantics.  If these simplified
		// ordering semantics continue to be redundant, that smp_mb()
		// might be removed.

Keeping in mind that the commit's log fully lays out the troublesome
scenario.

							Thanx, Paul
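
For illustration, here is a minimal, self-contained C11 userspace sketch of
the store-buffering pattern in the diagram above.  All names (cpu_side,
updater_side, curr, x) are invented for the example, and
atomic_thread_fence(memory_order_seq_cst) stands in for smp_mb() and
smp_mb__after_spinlock(); this is a sketch of the ordering argument, not
kernel code.

/*
 * cpu_side() stands in for the CPU's context switches (rq->curr = A, a full
 * barrier at the next __schedule(), rq->curr = B, then task B reading X).
 * updater_side() stands in for the pre-GP store, smp_mb(), and the rq->curr
 * read.  Build with: cc -std=c11 -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

enum { TASK_NONE, TASK_A, TASK_B };

static atomic_int x;              /* Target of the updater's pre-GP store. */
static atomic_int curr;           /* Stand-in for rq->curr (starts as TASK_NONE). */
static int updater_saw, b_saw_x;  /* Results, read only after the joins. */

static void *cpu_side(void *arg)
{
	(void)arg;
	atomic_store_explicit(&curr, TASK_A, memory_order_relaxed);   /* rq->curr = A */
	/* ... another context switch later ... */
	atomic_thread_fence(memory_order_seq_cst);   /* smp_mb__after_spinlock() in __schedule() */
	atomic_store_explicit(&curr, TASK_B, memory_order_relaxed);   /* rq->curr = B */
	b_saw_x = atomic_load_explicit(&x, memory_order_relaxed);     /* task B: READ X */
	return NULL;
}

static void *updater_side(void *arg)
{
	(void)arg;
	atomic_store_explicit(&x, 1, memory_order_relaxed);           /* X = 1 (pre-GP access) */
	atomic_thread_fence(memory_order_seq_cst);                     /* smp_mb() */
	updater_saw = atomic_load_explicit(&curr, memory_order_relaxed); /* READ rq->curr */
	return NULL;
}

int main(void)
{
	pthread_t cpu, upd;

	pthread_create(&cpu, NULL, cpu_side, NULL);
	pthread_create(&upd, NULL, updater_side, NULL);
	pthread_join(cpu, NULL);
	pthread_join(upd, NULL);

	/*
	 * The paired fences forbid "updater_saw == TASK_NONE && b_saw_x == 0":
	 * even if the updater's read misses the rq->curr = A write, task B is
	 * still guaranteed to observe X == 1.  The rq lock is needed for the
	 * separate requirement of not missing the latest context switch.
	 */
	printf("updater saw curr=%d, B saw X=%d\n", updater_saw, b_saw_x);
	return 0;
}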
