Message-ID: <20150217180154.GA28082@linux.vnet.ibm.com>
Date:	Tue, 17 Feb 2015 10:01:54 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Kirill Tkhai <ktkhai@...allels.com>, linux-kernel@...r.kernel.org,
	Ingo Molnar <mingo@...hat.com>,
	Josh Poimboeuf <jpoimboe@...hat.com>, oleg@...hat.com
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking
 cycles

On Tue, Feb 17, 2015 at 08:05:32AM -0800, Paul E. McKenney wrote:
> On Tue, Feb 17, 2015 at 02:05:23PM +0100, Peter Zijlstra wrote:
> > On Tue, Feb 17, 2015 at 01:12:58PM +0100, Peter Zijlstra wrote:
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -341,6 +341,22 @@ static struct rq *task_rq_lock(struct ta
> > >  		raw_spin_lock_irqsave(&p->pi_lock, *flags);
> > >  		rq = task_rq(p);
> > >  		raw_spin_lock(&rq->lock);
> > > +		/*
> > > +		 *	move_queued_task()		task_rq_lock()
> > > +		 *
> > > +		 *	ACQUIRE (rq->lock)
> > > +		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
> > > +		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
> > > +		 *	[S] ->cpu = new_cpu		[L] task_rq()
> > > +		 *					[L] ->on_rq
> > > +		 *	RELEASE (rq->lock)
> > > +		 *
> > > +		 * If we observe the old cpu in task_rq_lock, the acquire of
> > > +		 * the old rq->lock will fully serialize against the stores.
> > > +		 *
> > > +		 * If we observe the new cpu in task_rq_lock, the acquire will
> > > +		 * pair with the WMB to ensure we must then also see migrating.
> > > +		 */
> > >  		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
> > >  			return rq;
> > >  		raw_spin_unlock(&rq->lock);
> > 
> > Hey Paul, remember this: https://lkml.org/lkml/2014/7/16/310
> 
> I do now.  ;-)
> 
> > I just used a creative one :-)
> 
> The scenario above?
> 
> > BTW, should we attempt to include that table in memory-barriers.txt like
> > Mathieu said? As a cheat sheet with references to longer explanations
> > for the 'interesting' ones?
> > 
> > FWIW, we should probably update that table to include control
> > dependencies too; we didn't (formally) have those back then I think.
> > 
> > The blob under SMP BARRIER PAIRING does not mention pairing with control
> > dependencies; and I'm rather sure I've done so.
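
As an aside, the scenario in that quoted hunk can be modeled in user
space with C11 atomics, which is handy for experimenting outside the
kernel.  This is only a sketch: mover()/locker() and the rq_lock[]
array are made-up stand-ins for move_queued_task()/task_rq_lock() and
the per-runqueue locks, relaxed atomics stand in for plain accesses,
and the acquire fence stands in for the ordering that the rq->lock
acquire is argued to provide:

	/* Build with: cc -pthread model.c */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	#define MIGRATING	-1

	static atomic_int task_cpu;		/* models p->cpu, starts at 0 */
	static atomic_int on_rq = 1;		/* models p->on_rq, starts queued */
	static pthread_mutex_t rq_lock[2] = { PTHREAD_MUTEX_INITIALIZER,
					      PTHREAD_MUTEX_INITIALIZER };

	static void *mover(void *arg)		/* models move_queued_task() */
	{
		pthread_mutex_lock(&rq_lock[0]);		/* ACQUIRE (rq->lock) */
		atomic_store_explicit(&on_rq, MIGRATING,
				      memory_order_relaxed);	/* [S] ->on_rq */
		atomic_thread_fence(memory_order_release);	/* WMB (__set_task_cpu()) */
		atomic_store_explicit(&task_cpu, 1,
				      memory_order_relaxed);	/* [S] ->cpu = new_cpu */
		pthread_mutex_unlock(&rq_lock[0]);		/* RELEASE (rq->lock) */

		pthread_mutex_lock(&rq_lock[1]);		/* enqueue on the new rq */
		atomic_store_explicit(&on_rq, 1, memory_order_relaxed);
		pthread_mutex_unlock(&rq_lock[1]);
		return NULL;
	}

	static void *locker(void *arg)		/* models task_rq_lock() */
	{
		for (;;) {
			int cpu = atomic_load_explicit(&task_cpu,
					memory_order_relaxed);	/* [L] rq = task_rq() */
			pthread_mutex_lock(&rq_lock[cpu]);	/* ACQUIRE (rq->lock) */
			/* Stands in for the ordering the lock acquire gives us. */
			atomic_thread_fence(memory_order_acquire);
			if (cpu == atomic_load_explicit(&task_cpu,
					memory_order_relaxed) &&	/* [L] task_rq() */
			    atomic_load_explicit(&on_rq,
					memory_order_relaxed) != MIGRATING) {	/* [L] ->on_rq */
				printf("stable on rq %d\n", cpu);
				pthread_mutex_unlock(&rq_lock[cpu]);
				return NULL;
			}
			pthread_mutex_unlock(&rq_lock[cpu]);	/* raced; retry */
		}
	}

	int main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, mover, NULL);
		pthread_create(&t2, NULL, locker, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}

Drop the acquire fence and, at least in the C11 model, the re-check can
observe the new cpu together with a stale ->on_rq, which is exactly the
window the comment is about.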

And here is a patch for the control-dependency pairing.  Thoughts?

							Thanx, Paul

------------------------------------------------------------------------

documentation: Clarify control-dependency pairing

This commit explicitly states that control dependencies pair normally
with other barriers, and gives an example of such pairing.

Reported-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index ca2387ef27ab..b09880086d96 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -592,9 +592,9 @@ See also the subsection on "Cache Coherency" for a more thorough example.
 CONTROL DEPENDENCIES
 --------------------
 
-A control dependency requires a full read memory barrier, not simply a data
-dependency barrier to make it work correctly.  Consider the following bit of
-code:
+A load-load control dependency requires a full read memory barrier, not
+simply a data dependency barrier to make it work correctly.  Consider the
+following bit of code:
 
 	q = ACCESS_ONCE(a);
 	if (q) {
@@ -615,14 +615,15 @@ case what's actually required is:
 	}
 
 However, stores are not speculated.  This means that ordering -is- provided
-in the following example:
+for load-store control dependencies, as in the following example:
 
 	q = ACCESS_ONCE(a);
 	if (q) {
 		ACCESS_ONCE(b) = p;
 	}
 
-Please note that ACCESS_ONCE() is not optional!  Without the
+Control dependencies pair normally with other types of barriers.
+That said, please note that ACCESS_ONCE() is not optional!  Without the
 ACCESS_ONCE(), the compiler might combine the load from 'a' with other loads from
 'a', and the store to 'b' with other stores to 'b', with possible highly
 counterintuitive effects on ordering.
@@ -813,6 +814,8 @@ In summary:
       barrier() can help to preserve your control dependency.  Please
       see the Compiler Barrier section for more information.
 
+  (*) Control dependencies pair normally with other types of barriers.
+
   (*) Control dependencies do -not- provide transitivity.  If you
       need transitivity, use smp_mb().
 
@@ -850,6 +853,19 @@ Or:
 			      <data dependency barrier>
 			      y = *x;
 
+Or even:
+
+	CPU 1		      CPU 2
+	===============	      ===============================
+	r1 = ACCESS_ONCE(y);
+	<general barrier>
+	ACCESS_ONCE(x) = 1;   if (r2 = ACCESS_ONCE(x)) {
+			         <implicit control dependency>
+			         ACCESS_ONCE(y) = 1;
+			      }
+
+	assert(r1 == 0 || r2 == 0);
+
 Basically, the read barrier always has to be there, even though it can be of
 the "weaker" type.
 
