Message-ID: <462339C0.3070603@bigpond.net.au>
Date: Mon, 16 Apr 2007 18:54:24 +1000
From: Peter Williams <pwil3058@...pond.net.au>
To: Ingo Molnar <mingo@...e.hu>
CC: Willy Tarreau <w@....eu>, Pekka Enberg <penberg@...helsinki.fi>,
Con Kolivas <kernel@...ivas.org>, ck list <ck@....kolivas.org>,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair
Scheduler [CFS]
Ingo Molnar wrote:
> * Peter Williams <pwil3058@...pond.net.au> wrote:
>
>> One more quick comment. The claim that there is no concept of time
>> slice in the new scheduler is only true in the sense of the rather
>> arcane implementation of time slices extant in the O(1) scheduler.
>
> yeah. AFAIK most other mainstream OSs also still often use similarly
> 'arcane' concepts (i'm here ignoring literature, you can find everything
> and its opposite suggested in literature) so i felt the need to point
> out the difference ;) After all Linux is about doing a better mainstream
> OS, it is not about beating the OS literature at lunacy ;-)
>
> The precise statement would be: "there's no concept of giving out a
> time-slice to a task and sticking to it unless a higher-prio task comes
> along,
I would have said "no concept of using time slices to implement nice",
which always seemed strange to me.
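To make that concrete, the scheme I mean looks roughly like this (a
simplified standalone sketch in the spirit of the O(1) scheduler's
nice-to-timeslice mapping, not its actual code; the constants are
illustrative):

	/* Sketch: nice implemented by scaling the timeslice length. */
	#define MIN_TIMESLICE	  5	/* ms */
	#define DEF_TIMESLICE	100	/* ms, slice for nice 0 */

	static unsigned int nice_to_timeslice(int nice)	/* -20..19 */
	{
		/* More negative nice => higher prio => longer slice. */
		unsigned int slice = DEF_TIMESLICE * (20 - nice) / 20;

		return slice < MIN_TIMESLICE ? MIN_TIMESLICE : slice;
	}

So under that scheme nice doesn't change *when* you run so much as
*how long* you get to run once you do.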
If it really does what you just said, then a (malicious or otherwise)
CPU-intensive task that never sleeps would, once it got hold of the
CPU, hog it completely.
> nor is there a concept of having a low-res granularity
> ->time_slice thing. There is accurate accounting of how much CPU time a
> task used up, and there is a granularity setting that together gives the
> current task a fairness advantage of a given amount of nanoseconds -
> which has similar [but not equivalent] effects to traditional timeslices
> that most mainstream OSs use".
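If I read that right, the effect is something like the sketch below
(my own illustration of the idea; the names are hypothetical, not the
actual CFS code):

	typedef unsigned long long u64;
	typedef long long s64;

	struct task {
		u64 cpu_time_ns;	/* precise CPU time consumed */
		u64 fair_share_ns;	/* CPU time it was entitled to */
	};

	/* The current task keeps the CPU until some waiting task has
	 * fallen behind its fair share by more than the granularity. */
	static int should_preempt(const struct task *waiter,
				  u64 granularity_ns)
	{
		s64 deficit = (s64)(waiter->fair_share_ns -
				    waiter->cpu_time_ns);

		return deficit > (s64)granularity_ns;
	}
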
Most traditional OSes have more or less fixed time slices and do
their scheduling by fiddling with the dynamic priority.
Using total CPU time consumed will also come to grief when applied to
long-running tasks. Eventually, even very low-bandwidth tasks will
accumulate enough total CPU time to look busy. What needs to be
controlled is the CPU bandwidth the task is using, i.e. its recent
rate of CPU use, not its lifetime total.
Or have I not looked closely enough at what sched_granularity_ns does?
Is it really a control for the decay rate of a CPU usage bandwidth metric?
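For the record, the kind of metric I mean is an exponentially decayed
average, something like this sketch (the flavour of thing my SPA
schedulers track, not CFS code; constants illustrative):

	#define DECAY_SHIFT	3	/* new sample gets weight 1/8 */

	struct usage_est {
		unsigned long long avg_ns; /* decayed CPU use per period */
	};

	/* Call once per sampling period with the CPU time used in it. */
	static void usage_update(struct usage_est *est,
				 unsigned long long used_ns)
	{
		/* avg = 7/8 * avg + 1/8 * used: an EWMA, so a long-lived
		 * but low-bandwidth task decays back to its true rate
		 * instead of accumulating an ever-growing total. */
		est->avg_ns = est->avg_ns - (est->avg_ns >> DECAY_SHIFT)
			    + (used_ns >> DECAY_SHIFT);
	}
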
>
>> Your new parameter sched_granularity_ns is equivalent to the concept
>> of time slice in most other kernels that I've peeked inside and
>> computing literature in general (going back over several decades e.g.
>> the magic garden).
>
> note that you can set it to 0 and the box still functions - so
> sched_granularity_ns, while useful for performance/bandwidth workloads,
> isnt truly inherent to the design.
Just like my SPA schedulers. But if you set it to zero, you'll get a
fairly high context switch rate, with the associated overhead, won't
you?
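(As a rough illustration: if preemption is driven from the scheduler
tick, then with granularity 0 two CPU-bound tasks at HZ=1000 could
bounce the CPU between them every millisecond, i.e. on the order of a
thousand context switches a second, where a 10ms granularity would
allow only about a hundred.)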
>
> So in the announcement i just opted for a short sentence: "there's no
> concept of timeslices", albeit like most short sentences it's not a
> technically 100% accurate statement - but still it conveyed the intended
> information more effectively to the interested lkml reader than the
> longer version could ever have B-)
I hope that I implied that I was being picky :-) (I meant to -- imply I
was being picky, that is).
Peter
--
Peter Williams pwil3058@...pond.net.au
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce