Message-ID: <20150316222611.782cc0e4@grimm.local.home>
Date: Mon, 16 Mar 2015 22:26:11 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Nicholas Miell <nmiell@...cast.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...hat.com>,
Alan Cox <gnomes@...rguk.ukuu.org.uk>,
Lai Jiangshan <laijs@...fujitsu.com>,
Stephen Hemminger <stephen@...workplumber.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Josh Triplett <josh@...htriplett.org>,
Thomas Gleixner <tglx@...utronix.de>,
David Howells <dhowells@...hat.com>
Subject: Re: [RFC PATCH] sys_membarrier(): system/process-wide memory
barrier (x86) (v12)
[ Removed npiggin@...nel.dk as I keep getting bounces from that addr ]
On Tue, 17 Mar 2015 01:45:25 +0000 (UTC)
Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> ----- Original Message -----
> > From: "Peter Zijlstra" <peterz@...radead.org>
> > To: "Mathieu Desnoyers" <mathieu.desnoyers@...icios.com>
> > Cc: linux-kernel@...r.kernel.org, "KOSAKI Motohiro" <kosaki.motohiro@...fujitsu.com>, "Steven Rostedt"
> > <rostedt@...dmis.org>, "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>, "Nicholas Miell" <nmiell@...cast.net>,
> > "Linus Torvalds" <torvalds@...ux-foundation.org>, "Ingo Molnar" <mingo@...hat.com>, "Alan Cox"
> > <gnomes@...rguk.ukuu.org.uk>, "Lai Jiangshan" <laijs@...fujitsu.com>, "Stephen Hemminger"
> > <stephen@...workplumber.org>, "Andrew Morton" <akpm@...ux-foundation.org>, "Josh Triplett" <josh@...htriplett.org>,
> > "Thomas Gleixner" <tglx@...utronix.de>, "David Howells" <dhowells@...hat.com>, "Nick Piggin" <npiggin@...nel.dk>
> > Sent: Monday, March 16, 2015 4:54:35 PM
> > Subject: Re: [RFC PATCH] sys_membarrier(): system/process-wide memory barrier (x86) (v12)
Can you please fix your mail client to not include the entire header
in your replies?
> Let's consider the following memory barrier scenario performed in
> user-space on an architecture with very relaxed ordering. PowerPC comes
> to mind.
>
> https://lwn.net/Articles/573436/
> scenario 12:
>
> CPU 0                             CPU 1
> CAO(x) = 1;                       r3 = CAO(y);
> cmm_smp_wmb();                    cmm_smp_rmb();
> CAO(y) = 1;                       r4 = CAO(x);
>
> BUG_ON(r3 == 1 && r4 == 0)
>
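> As a rough user-space rendering of the same test (CAO() is urcu's
> CMM_ACCESS_ONCE(), approximated here with relaxed C11 atomics, and the
> cmm_smp_*() barriers mapped to C11 fences):
>
>	#include <assert.h>
>	#include <stdatomic.h>
>
>	atomic_int x, y;
>
>	void cpu0(void)		/* writer */
>	{
>		atomic_store_explicit(&x, 1, memory_order_relaxed);
>		/* cmm_smp_wmb(): order the two stores */
>		atomic_thread_fence(memory_order_release);
>		atomic_store_explicit(&y, 1, memory_order_relaxed);
>	}
>
>	void cpu1(void)		/* reader */
>	{
>		int r3 = atomic_load_explicit(&y, memory_order_relaxed);
>		/* cmm_smp_rmb(): order the two loads */
>		atomic_thread_fence(memory_order_acquire);
>		int r4 = atomic_load_explicit(&x, memory_order_relaxed);
>		assert(!(r3 == 1 && r4 == 0));	/* cannot fire */
>	}
>
> The release/acquire fences are at least as strong as the wmb/rmb pair,
> so the assertion can never fire here.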
>
> We tweak it to use sys_membarrier on CPU 1, and a simple compiler
> barrier() on CPU 0:
>
> CPU 0                             CPU 1
> CAO(x) = 1;                       r3 = CAO(y);
> barrier();                        sys_membarrier();
> CAO(y) = 1;                       r4 = CAO(x);
>
> BUG_ON(r3 == 1 && r4 == 0)
>
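> Sketched the same way (reusing x, y and the includes from the previous
> snippet; __NR_membarrier is a placeholder for whatever syscall number
> the final patch gets, and the final ABI may take extra arguments):
>
>	#include <unistd.h>
>	#include <sys/syscall.h>
>
>	#define barrier()	__asm__ __volatile__("" ::: "memory")
>
>	void cpu0(void)		/* writer: compiler barrier only */
>	{
>		atomic_store_explicit(&x, 1, memory_order_relaxed);
>		barrier();	/* constrains the compiler, emits no fence */
>		atomic_store_explicit(&y, 1, memory_order_relaxed);
>	}
>
>	void cpu1(void)		/* reader: the syscall supplies the fences */
>	{
>		int r3 = atomic_load_explicit(&y, memory_order_relaxed);
>		syscall(__NR_membarrier);
>		int r4 = atomic_load_explicit(&x, memory_order_relaxed);
>		assert(!(r3 == 1 && r4 == 0));	/* the guarantee we want */
>	}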
> Now if CPU 1 executes sys_membarrier while CPU 0 is preempted after both
> stores, we have:
>
> CPU 0                             CPU 1
> CAO(x) = 1;
> [1st store is slow to
>  reach other cores]
> CAO(y) = 1;
> [2nd store reaches other
>  cores more quickly]
> [preempted]
>                                   r3 = CAO(y)
>                                   (may see y = 1)
>                                   sys_membarrier()
> Scheduler changes rq->curr.
>                                   skips CPU 0, because rq->curr has
>                                   been updated.
>                                   [return to userspace]
>                                   r4 = CAO(x)
>                                   (may see x = 0)
>                                   BUG_ON(r3 == 1 && r4 == 0) -> fails.
> load_cr3, with implied
> memory barrier, comes
> after CPU 1 has read "x".
>
> The only way to make this scenario work is if a memory barrier is added
> before updating rq->curr. (We could construct a similar scenario showing
> that a barrier is also needed after the store to rq->curr.)
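>
> Roughly, on the scheduler side that would be (a sketch against a
> simplified __schedule(), not the actual source):
>
>	smp_mb();		/* prev's user-space accesses happen    */
>				/* before the rq->curr update ...       */
>	rq->curr = next;	/* ... which sys_membarrier keys off    */
>	smp_mb();		/* ... and the update happens before    */
>				/* next's user-space accesses           */
>	context_switch(rq, prev, next);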
Hmm, I wonder if anything would break if rq->curr were updated after
the context_switch() call?
Would that help?
	this_cpu_write(saved_next, next);	/* stash next across the switch */
	rq = context_switch(rq, prev, next);
	rq->curr = this_cpu_read(saved_next);	/* update curr only afterwards */
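(saved_next being a new per-cpu variable, something like
DEFINE_PER_CPU(struct task_struct *, saved_next); the name is made up.)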
As I recently found out, this_cpu_read()/this_cpu_write() are not that
nice on all architectures, so something else may be needed. Or we can
add a temp variable on the rq:
rq->saved_next = next;
rq = context_switch(rq, prev, next);
rq->curr = rq->saved_next;
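That variant assumes a new scratch field on the runqueue, roughly
(untested sketch):

	struct rq {
		...
		/* stash of next, for updating curr after context_switch() */
		struct task_struct *saved_next;
		...
	};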
-- Steve