Message-Id: <20071027142239.0ded1e42.pj@sgi.com>
Date: Sat, 27 Oct 2007 14:22:39 -0700
From: Paul Jackson <pj@....com>
To: David Rientjes <rientjes@...gle.com>
Cc: Lee.Schermerhorn@...com, clameter@....com,
akpm@...ux-foundation.org, ak@...e.de, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] cpusets: add interleave_over_allowed option

David wrote:
> I prefer Choice B because it does not force mempolicies to have any
> dependence on cpusets with regard to what nodemask is passed.

Yes, well said.

> It would be very good to store the passed nodemask to set_mempolicy in
> struct mempolicy,

Yes - that's what I'm intending to do.
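
Something along these lines -- just a minimal sketch, where the
user_nodemask field name and the surrounding layout are illustrative,
not the actual mm/mempolicy.c definitions:

/*
 * Sketch: remember the nodemask the caller originally passed to
 * set_mempolicy(), alongside the effective mask the kernel computed
 * from it.  Illustrative layout, not the real struct mempolicy.
 */
struct mempolicy {
	atomic_t	refcnt;
	short		policy;		/* MPOL_INTERLEAVE, ... */
	nodemask_t	nodes;		/* effective nodes, post-cpuset */
	nodemask_t	user_nodemask;	/* nodes as originally requested */
};

Keeping the original request around means a later cpuset change can be
handled by remapping user_nodemask onto the new mems_allowed, rather
than remapping an already-remapped mask over and over.
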
> If the cpuset has fewer than four nodes, the behavior
> should be undefined (probably implemented to just cycle the set of
> mems_allowed until you reach the fourth entry).

I do intend to implement it as you suggest. See the lib/bitmap.c
routines bitmap_remap() and bitmap_bitremap(), and their nodemask
wrappers, nodes_remap() and node_remap(). They will define the
cycling, or, as I sometimes call it, the folding.

I would have tended to make this folding a defined part of the API,
though I will grant that the possibility of being lazy and forgetting
to document it seems attractive (less to document ;).
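
For illustration, here is a tiny user-space model of that folding.
It only mimics the modulo arithmetic that bitmap_bitremap() applies;
it uses plain unsigned longs rather than the kernel's nodemask_t:

#include <stdio.h>

/*
 * Toy model: the n-th set bit of "old" maps to the
 * (n mod weight(new))-th set bit of "new".  Assumes oldbit
 * is actually set in old.
 */
static int fold_node(int oldbit, unsigned long old, unsigned long new)
{
	int n = 0, b, i;

	for (i = 0; i < oldbit; i++)	/* rank of oldbit among old's bits */
		if (old & (1UL << i))
			n++;
	n %= __builtin_popcountl(new);	/* fold onto the smaller mask */

	for (b = 0; ; b++)		/* return the n-th set bit of new */
		if (new & (1UL << b))
			if (n-- == 0)
				return b;
}

int main(void)
{
	unsigned long old = 0x0f;	/* policy wants nodes 0-3 */
	unsigned long new = 0x03;	/* cpuset allows only nodes 0-1 */
	int node;

	for (node = 0; node < 4; node++)
		printf("node %d -> node %d\n",
		       node, fold_node(node, old, new));
	return 0;	/* prints 0->0, 1->1, 2->0, 3->1 */
}

So an interleave policy over four nodes, folded into a two-node
cpuset, cycles 0, 1, 0, 1 across the allowed nodes, which is the
behavior David suggested above.
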
> That [running in a cpuset with fewer nodes than used in a memory policy
> mask] is the result of constraining a task to a cpuset that obviously
> wants access to more nodes -- it's a userspace mistake and abusing
> cpusets so that the task does not get what it expects.

Nah - I wouldn't put it that way. It's no mistake or abuse. It's just
one more example of a kernel making too few resources look sufficient
by sharing, multiplexing and virtualizing them. That's what kernels do.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@....com> 1.925.600.0401