Message-ID: <453C4E22.9000308@yahoo.com.au>
Date: Mon, 23 Oct 2006 15:07:46 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Paul Jackson <pj@....com>
CC: dino@...ibm.com, akpm@...l.org, mbligh@...gle.com,
menage@...gle.com, Simon.Derr@...l.net,
linux-kernel@...r.kernel.org, rohitseth@...gle.com, holt@....com,
dipankar@...ibm.com, suresh.b.siddha@...el.com
Subject: Re: [RFC] cpuset: add interface to isolated cpus
Paul Jackson wrote:
> Dinakar wrote:
>
>>IMO this patch addresses just one of the requirements for partitionable
>>sched domains
>
>
> Correct - this particular patch was just addressing one of these.
>
> Nick raised the reasonable concern that this patch was adding something
> to cpusets that was not especially related to cpusets.
Did you resend the patch to remove sched-domain partitioning?
After clearing up my confusion, IMO that is needed and could probably
go into 2.6.19.
> So I will not be sending this patch to Andrew for *-mm.
>
> There are further opportunities for improvements in some of this code,
> which my colleague Christoph Lameter may be taking an interest in.
> Ideally kernel-user API's for isolating and partitioning sched domains
> would arise from that work, though I don't know if we can wait that
> long.
The sched-domains code is all there and just ready to be used. IMO
using the cpusets API (or a slight extension thereof) would be the
best idea if we're going to use any explicit interface at all.
A cool option would be to derive the partitions automatically, by
merging the cpus_allowed masks of all tasks into disjoint sets. I see
this getting computationally expensive though, probably
O(tasks*CPUs)... I guess that isn't too bad.
Might be better than a userspace interface.
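
For what it's worth, here is a rough user-space sketch of that
computation (hypothetical, not kernel code: it assumes cpumasks fit in
64 bits and simply merges intersecting masks until the survivors are
disjoint; each surviving mask would be one sched-domain partition):

/*
 * Sketch only: model each task's cpus_allowed as a 64-bit mask
 * (assumes <= 64 CPUs) and merge overlapping masks. With disjoint
 * nonzero 64-bit masks there can never be more than 64 partitions.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_PARTS 64

static uint64_t parts[MAX_PARTS];
static int nr_parts;

/* Fold one task's mask into the current set of disjoint partitions. */
static void add_mask(uint64_t mask)
{
	int i;

	for (i = 0; i < nr_parts; ) {
		if (parts[i] & mask) {
			/* Overlaps: absorb it, then re-check this slot. */
			mask |= parts[i];
			parts[i] = parts[--nr_parts];
		} else {
			i++;
		}
	}
	parts[nr_parts++] = mask;
}

int main(void)
{
	/* Example cpus_allowed masks for a handful of tasks. */
	uint64_t tasks[] = { 0x3, 0x6, 0x30, 0xc0 };
	int i;

	for (i = 0; i < (int)(sizeof(tasks) / sizeof(tasks[0])); i++)
		add_mask(tasks[i]);

	/* Prints 0x7, 0x30, 0xc0: three disjoint partitions. */
	for (i = 0; i < nr_parts; i++)
		printf("partition %d: cpus %#llx\n", i,
		       (unsigned long long)parts[i]);
	return 0;
}

Each add_mask() call scans the current partition list once, which is
where the roughly O(tasks*CPUs) cost comes from.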
--
SUSE Labs, Novell Inc.