Message-ID: <20100201145831.GB19520@laptop>
Date: Tue, 2 Feb 2010 01:58:31 +1100
From: Nick Piggin <npiggin@...e.de>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc: Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Steven Rostedt <rostedt@...dmis.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Nicholas Miell <nmiell@...cast.net>, laijs@...fujitsu.com,
dipankar@...ibm.com, josh@...htriplett.org, dvhltc@...ibm.com,
niv@...ibm.com, tglx@...utronix.de, Valdis.Kletnieks@...edu,
dhowells@...hat.com
Subject: Re: [patch 2/3] scheduler: add full memory barriers upon task
switch at runqueue lock/unlock
On Mon, Feb 01, 2010 at 09:47:59AM -0500, Mathieu Desnoyers wrote:
> * Nick Piggin (npiggin@...e.de) wrote:
> > Well I just mean that it's something for -rt to work out. Apps can
> > still work if the call is unsupported completely.
>
> OK, so we seem to be settling for the spinlock-based sys_membarrier()
> this time, which is much less intrusive in terms of scheduler
> fast path modification, but adds more system overhead each time
> sys_membarrier() is called. This trade-off makes sense to me, as we
> expect the scheduler to execute _much_ more often than sys_membarrier().
>
> When I get confirmation that's the route to follow from both of you,
> I'll go back to the spinlock-based scheme for v9.
I think a locking or cacheline-bouncing DoS is just something we can't
realistically worry too much about in the standard kernel -- no further
than the generally good practice of scalability: avoiding starvation,
long lock hold times, etc.
So I would prefer the simpler version that doesn't add overhead to
ctxsw, at least for the first implementation.
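
For reference, the spinlock-based scheme under discussion amounts to
something like the sketch below. This is not the actual patch, just a
rough illustration against (approximate) kernel APIs, and it assumes
that a runqueue lock/unlock pair implies a full memory barrier on the
remote CPU -- an assumption that was itself being debated in this
thread:

	/*
	 * Sketch only: for each CPU currently running a thread of the
	 * calling process, acquire and release that CPU's runqueue
	 * lock. The barriers implied by the lock/unlock stand in for
	 * an explicit smp_mb() on that CPU, so nothing is added to
	 * the context switch fast path itself.
	 */
	SYSCALL_DEFINE0(membarrier)
	{
		int cpu;

		smp_mb();	/* order caller's accesses first */
		for_each_online_cpu(cpu) {
			if (cpu_curr(cpu)->mm == current->mm) {
				struct rq *rq = cpu_rq(cpu);

				raw_spin_lock(&rq->lock);
				raw_spin_unlock(&rq->lock);
			}
		}
		smp_mb();	/* order against later accesses */
		return 0;
	}

The cost here is paid entirely by the sys_membarrier() caller (one
remote lock bounce per matching CPU), which is the trade-off Mathieu
describes above.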
Thanks,
Nick