Message-ID: <453C660A.1060405@yahoo.com.au>
Date: Mon, 23 Oct 2006 16:49:46 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Paul Jackson <pj@....com>
CC: dino@...ibm.com, akpm@...l.org, mbligh@...gle.com,
menage@...gle.com, Simon.Derr@...l.net,
linux-kernel@...r.kernel.org, rohitseth@...gle.com, holt@....com,
dipankar@...ibm.com, suresh.b.siddha@...el.com
Subject: Re: [RFC] cpuset: add interface to isolated cpus
Paul Jackson wrote:
> Nick wrote:
>
>>These are both part of the same larger solution, which is to
>>partition domains. Isolated CPUs are just the case of 1 CPU in
>>its own domain (and that's how they are implemented now).
>
>
> and later, he also wrote:
>
>>I think this is much more of an automatic behind your back thing.
>
>
> I got confused there.
>
> I agree that if we can do a -good- job of it, then an implicit,
> automatic solution is better for the problem of reducing sched domain
> partition sizes on large systems than yet another manual knob.
OK, good.
> But I thought that it was a good idea, with general agreement, to provide
> an explicit control of isolated cpus for the real-time folks, even if
> under the covers it uses sched domain partitions of size 1 to implement
> it.
If they isolate a CPU by setting the cpus_allowed masks of processes
to reflect the way they'd like balancing to be carried out, then
the partition will be made for them.
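For illustration, here is a minimal userspace sketch of what setting a
task's cpus_allowed mask looks like (the CPU number 3 and the direct use
of sched_setaffinity(2) are assumptions for the example, not anything
specified in this thread):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>

	int main(void)
	{
		cpu_set_t mask;

		CPU_ZERO(&mask);
		CPU_SET(3, &mask);	/* confine this task to CPU 3 only */

		/* pid 0 means the calling task */
		if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
			perror("sched_setaffinity");
			return 1;
		}

		printf("now restricted to CPU 3\n");
		return 0;
	}

The idea above is that if tasks' masks collectively express an isolation
like this, the sched domain partition of size 1 falls out automatically.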
But an explicit control might be required anyway, and I wouldn't
disagree with it. It might need to do more than just sched
partitioning (e.g. pdflush and other kernel threads should probably
be made to stay off isolated cpus as well, where possible).
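To make the explicit-control angle concrete, here is a hedged sketch of
driving the existing cpuset filesystem from userspace. The /dev/cpuset
mount point and the cpus/mems/cpu_exclusive/tasks files are the
documented cpuset interface; the cpuset name "rt" and the CPU/node
numbers are arbitrary assumptions, and the new isolated-cpus file this
RFC proposes is not shown since its name isn't settled here:

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/stat.h>
	#include <sys/types.h>
	#include <unistd.h>

	/* write a short string into one cpuset control file */
	static void put(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f) {
			perror(path);
			exit(1);
		}
		fprintf(f, "%s", val);
		fclose(f);
	}

	int main(void)
	{
		char pid[32];

		/* assumes the cpuset fs is already mounted on /dev/cpuset */
		mkdir("/dev/cpuset/rt", 0755);

		put("/dev/cpuset/rt/cpus", "3");		/* CPU 3 only */
		put("/dev/cpuset/rt/mems", "0");		/* memory node 0 */
		put("/dev/cpuset/rt/cpu_exclusive", "1");	/* no overlap with siblings */

		/* move the calling process into the new cpuset */
		snprintf(pid, sizeof(pid), "%d", (int)getpid());
		put("/dev/cpuset/rt/tasks", pid);

		return 0;
	}

Note this only moves the one task written to "tasks"; kernel threads
such as pdflush are untouched, which is the gap an explicit control
would also have to address.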
--
SUSE Labs, Novell Inc.