Message-ID: <20100201164856.GA3486@Krystal>
Date: Mon, 1 Feb 2010 11:48:57 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Steven Rostedt <rostedt@...dmis.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Nicholas Miell <nmiell@...cast.net>, laijs@...fujitsu.com,
dipankar@...ibm.com, josh@...htriplett.org, dvhltc@...ibm.com,
niv@...ibm.com, tglx@...utronix.de, peterz@...radead.org,
Valdis.Kletnieks@...edu, dhowells@...hat.com
Subject: Re: [patch 2/3] scheduler: add full memory barriers upon task
switch at runqueue lock/unlock

* Linus Torvalds (torvalds@...ux-foundation.org) wrote:
>
>
> On Mon, 1 Feb 2010, Mathieu Desnoyers wrote:
> >
> > However, this does not deal with mm_cpumask update, and we cannot use
> > the per-cpu rq lock, as it's a process-wide data structure updated with
> > clear_bit/set_bit in switch_mm(). So at the very least, we would have to
> > add memory barriers in switch_mm() on some architectures to deal with
> > this.
>
> I'd much rather have "switch_mm() is a guaranteed memory barrier" logic,
> because quite frankly, I don't see how it ever couldn't be one anyway. It
> fundamentally needs to do at least a TLB context switch (which may be just
> switching an ASI around, not flushing the whole TLB, of course), and I bet
> that for 99% of all architectures, that is already pretty much guaranteed
> to be equivalent to a memory barrier.
>
> It certainly is for x86. "mov to cr3" is serializing (setting any control
> register except cr8 is serializing). And I strongly suspect other
> architectures will be too.

What we have to be careful about here is that it's not enough to just
rely on switch_mm() containing a memory barrier. What we really need to
enforce is that switch_mm() issues memory barriers both _before_ and
_after_ mm_cpumask modification. The "after" part is usually dealt with
by the TLB context switch, but the "before" part usually isn't.
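
To make that concrete, here is a minimal sketch of where the two
barriers would have to sit. This is not any architecture's actual
switch_mm(): cpumask_clear_cpu()/cpumask_set_cpu() are the kernel's
wrappers around the clear_bit/set_bit calls mentioned above, but
load_new_page_tables() is a made-up stand-in for the arch-specific
page table switch.

static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
			     struct task_struct *tsk)
{
	unsigned int cpu = smp_processor_id();

	/*
	 * "Before" barrier: order the previous task's memory accesses
	 * before the mm_cpumask update, so a concurrent sys_membarrier()
	 * iterating on mm_cpumask cannot miss them. clear_bit/set_bit
	 * are atomic but are not full barriers on all architectures.
	 */
	smp_mb();

	cpumask_clear_cpu(cpu, mm_cpumask(prev));
	cpumask_set_cpu(cpu, mm_cpumask(next));

	/*
	 * "After" barrier: usually implied by the TLB context switch
	 * itself (e.g. the serializing write to cr3 on x86). An
	 * architecture whose page table switch is not a full barrier
	 * would need an explicit smp_mb() here instead.
	 */
	load_new_page_tables(next);	/* arch-specific, hypothetical */
}
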
>
> Btw, one reason to strongly prefer "switch_mm()" over any random context
> switch is that at least it won't affect inter-thread (kernel or user-land)
> switching, including switching to/from the idle thread.
>
> So I'd be _much_ more open to a "let's guarantee that 'switch_mm()' always
> implies a memory barrier" model than to playing clever games with
> spinlocks.

If we really want to make this patch less intrusive, we can consider
iterating over each online CPU in sys_membarrier() rather than over
mm_cpumask. But that comes at the cost of useless cache-line bouncing
on large machines with few threads running in the process, as we would
grab the rq locks one by one for all CPUs (roughly as sketched below).
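
A rough sketch of that fallback, assuming the rq lock/unlock pair
provides the ordering discussed in this thread; membarrier_all_cpus()
is a made-up name, and cpu_rq()/rq->lock are scheduler-internal:

static void membarrier_all_cpus(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct rq *rq = cpu_rq(cpu);

		/*
		 * Taking and releasing every runqueue lock orders this
		 * CPU against the remote CPU's task switch, but bounces
		 * each rq lock cache line even for CPUs that never ran
		 * a thread of the calling process.
		 */
		spin_lock(&rq->lock);
		spin_unlock(&rq->lock);
	}
}
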
Thanks,
Mathieu
--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68