Message-ID: <1264090637.4283.1178.camel@laptop>
Date: Thu, 21 Jan 2010 17:17:17 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc: Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Oleg Nesterov <oleg@...hat.com>, Ingo Molnar <mingo@...e.hu>,
akpm@...ux-foundation.org, josh@...htriplett.org,
tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
barrier (v5)
On Thu, 2010-01-21 at 11:07 -0500, Mathieu Desnoyers wrote:
>
> One efficient way to meet the requirement of sys_membarrier() would be
> to create spin_lock_mb()/spin_unlock_mb(), which would have full memory
> barriers rather than acquire/release semantics. These could be used
> within schedule() execution. On UP, they would turn into preempt off/on
> and a compiler barrier, just like normal spin locks.
>
> On architectures like x86, the atomic instructions already imply a full
> memory barrier, so we have a direct mapping and no overhead. On
> architectures where the spin lock only provides acquire semantics (e.g.
> powerpc, using lwsync and isync), we would have to create an alternate
> implementation using "sync".
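Something like the below, I suppose (a completely untested sketch using
your proposed names; on x86 the extra smp_mb() in spin_lock_mb() would be
redundant, since the LOCK-prefixed atomic in spin_lock() already implies
a full barrier, so an arch-specific version would want to elide it, and
on UP smp_mb() degrades to a compiler barrier as you describe):

	/* Untested sketch: spin lock ops with full-mb semantics. */
	static inline void spin_lock_mb(spinlock_t *lock)
	{
		spin_lock(lock);	/* acquire semantics + preempt off */
		smp_mb();		/* upgrade acquire to a full barrier */
	}

	static inline void spin_unlock_mb(spinlock_t *lock)
	{
		smp_mb();		/* full barrier before the release */
		spin_unlock(lock);	/* release semantics + preempt on */
	}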
There's also clear_tsk_need_resched(), which is an atomic op.
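If we wanted to piggy-back a full barrier on that, something like this
(untested; clear_tsk_need_resched_mb() is a made-up name, and IIRC
smp_mb__after_clear_bit() is a no-op on x86, where the atomic op is
already a full mb):

	static inline void clear_tsk_need_resched_mb(struct task_struct *tsk)
	{
		clear_tsk_need_resched(tsk);	/* atomic clear_bit() on TIF_NEED_RESCHED */
		smp_mb__after_clear_bit();	/* full mb where clear_bit() isn't one */
	}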
The thing I'm worried about is not making schedule() more expensive for
the benefit of a relatively rare operation like sys_membarrier(), while
at the same time making sure that a while (1) sys_membarrier(); loop
can't ruin your system.
On x86 there is plenty that implies a full mb before rq->curr = next;
the thing to figure out is what is generally the cheapest place to force
one on other architectures.
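To spell out what we need (pseudo-code, not a patch; this assumes the v5
scheme where sys_membarrier() looks at cpu_rq(cpu)->curr to decide which
CPUs to IPI):

	/*
	 * schedule() must order the previous task's user-space memory
	 * accesses before the rq->curr update that sys_membarrier()
	 * reads, roughly:
	 */
	... prev's user-space accesses ...

	smp_mb();	/* required on architectures where nothing
			 * between the last user-space access and the
			 * rq->curr update already implies a full mb */

	rq->curr = next;	/* observed by sys_membarrier() */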
Not sure where that leaves us, since I'm not too familiar with !x86.