Message-ID: <4624633D.4080005@bigpond.net.au>
Date: Tue, 17 Apr 2007 16:03:41 +1000
From: Peter Williams <pwil3058@...pond.net.au>
To: Nick Piggin <npiggin@...e.de>
CC: "Michael K. Edwards" <medwards.linux@...il.com>,
William Lee Irwin III <wli@...omorphy.com>,
Ingo Molnar <mingo@...e.hu>, Matt Mackall <mpm@...enic.com>,
Con Kolivas <kernel@...ivas.org>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]

Nick Piggin wrote:
> On Tue, Apr 17, 2007 at 02:25:39PM +1000, Peter Williams wrote:
>> Nick Piggin wrote:
>>> On Mon, Apr 16, 2007 at 04:10:59PM -0700, Michael K. Edwards wrote:
>>>> On 4/16/07, Peter Williams <pwil3058@...pond.net.au> wrote:
>>>>> Note that I talk of run queues
>>>>> not CPUs as I think a shift to multiple CPUs per run queue may be a good
>>>>> idea.
>>>> This observation of Peter's is the best thing to come out of this
>>>> whole foofaraw. Looking at what's happening in CPU-land, I think it's
>>>> going to be necessary, within a couple of years, to replace the whole
>>>> idea of "CPU scheduling" with "run queue scheduling" across a complex,
>>>> possibly dynamic mix of CPU-ish resources. Ergo, there's not much
>>>> point in churning the mainline scheduler through a design that isn't
>>>> significantly more flexible than any of those now under discussion.
>>> Why? If you do that, then your load balancer just becomes less flexible
>>> because it is harder to have tasks run on one or the other.
>>>
>>> You can have single-runqueue-per-domain behaviour (or close to) just by
>>> relaxing all restrictions on idle load balancing within that domain. It
>>> is harder to go the other way and place any per-cpu affinity or
>>> restrictions with multiple cpus on a single runqueue.
>> Allowing N (where N can be one or greater) CPUs per run queue actually
>> increases flexibility as you can still set N to 1 to get the current
>> behaviour.
>
> But you add extra code for that on top of what we have, and are also
> prevented from making per-cpu assumptions.
>
> And you can get N CPUs per runqueue behaviour by having them in a domain
> with no restrictions on idle balancing. So where does your increased
> flexibilty come from?
>
>> One advantage of allowing multiple CPUs per run queue would be at the
>> smaller end of the system scale i.e. a PC with a single hyper threading
>> chip (i.e. 2 CPUs) would not need to worry about load balancing at all
>> if both CPUs used the one run queue, and all the nasty side effects
>> that come with hyper threading would be minimized at the same time.
>
> I don't know about that -- the current load balancer already minimises
> the nasty multi threading effects. SMT is very important for IBM's chips
> for example, and they've never had any problem with that side of it
> since it was introduced and bugs ironed out (at least, none that I've
> heard).
>
There's a lot of ugly code in the load balancer that is only there to
overcome the side effects of SMT and dual core. A lot of it was put
there by Intel employees trying to make load balancing friendlier to
their systems. What I'm suggesting is that an N-CPUs-per-runqueue
scheme is a better way of achieving that end. I may (of course) be
wrong, but I think the idea deserves more consideration than you're
willing to give it.
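
To make that concrete, here's a purely illustrative sketch of what a
run queue serving N CPUs might look like (the struct and field names
are made up for the sake of the example; this isn't a patch and isn't
how the current code is laid out):

	struct shared_runqueue {
		spinlock_t	lock;		/* taken by every CPU attached to this queue */
		cpumask_t	cpus;		/* the CPUs this run queue serves */
		unsigned int	nr_cpus;	/* N == 1 gives today's per-CPU behaviour */
		unsigned long	nr_running;	/* total runnable tasks on the queue */
		struct list_head tasks;		/* policy-specific queues would hang off here */
	};

With an SMT pair sharing one such queue there's nothing to balance
between the siblings, so the SMT special cases in the load balancer
simply aren't needed for that domain, and setting N to 1 everywhere
gives exactly the current per-CPU arrangement.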
Peter
--
Peter Williams pwil3058@...pond.net.au
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce