Message-Id: <1187210893.5422.60.camel@localhost>
Date: Wed, 15 Aug 2007 16:48:13 -0400
From: Lee Schermerhorn <Lee.Schermerhorn@...com>
To: Christoph Lameter <clameter@....com>
Cc: "Serge E. Hallyn" <serge@...lyn.com>,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>, bob.picco@...com,
nacc@...ibm.com, kamezawa.hiroyu@...fujitsu.com, mel@...net.ie,
akpm@...ux-foundation.org, Balbir Singh <balbir@...ibm.com>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
lkml <linux-kernel@...r.kernel.org>,
ckrm-tech <ckrm-tech@...ts.sourceforge.net>
Subject: Re: Regression in 2.6.23-rc2-mm2, mounting cpusets causes a hang
On Wed, 2007-08-15 at 13:36 -0700, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Lee Schermerhorn wrote:
>
> > > So it's always true for node 0. The "bit" is set.
> >
> > The issue is with the N_*_MEMORY masks. They don't get initialized
> > properly because node_set_state() is a no-op if !NUMA. So, where we
> > look for intersections with or where we AND with the N_*_MEMORY masks we
> > get the empty set.
>
> That is intentional. Memory is always present if you are on !NUMA. You can
> simply use a default nodemask where only node 0 is set. That is what the
> fallback provides. Maybe it does not provide the right thing for cpusets?
>
> > > We are trying to get cpusets to work with !NUMA?
> >
> > Well, yes. In Serge's case, he's trying to use cpusets with !NUMA.
> > He'll have to comment on the reasons for that. Looking at all of the
> > #ifdefs and init/Kconfig, CPUSET does not depend on NUMA--only SMP and
> > CONTAINERS [altho' methinks CPUSET should select CONTAINERS rather than
> > depend on it...]. So, you can use cpusets to partition off cpus in
> > non-NUMA configs.
>
> Looks like we need to fix cpuset nodemasks for the !NUMA case then?
> It cannot expect to find valid nodemasks if !NUMA.
Well, OK. But Paul really hates #ifdefs in kernel/cpuset.c. He's
asked me to remove them before, so I avoided them here. Cpusets really
should use only nodes with memory--i.e., the N_HIGH_MEMORY state.
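Illustrative only--the actual cpuset call sites may differ, and
cpuset_mems_valid() is just a made-up name--but the kind of check I
mean is:

#include <linux/nodemask.h>

/*
 * Hypothetical helper:  a cpuset's mems_allowed should only contain
 * nodes that actually have memory.  N_HIGH_MEMORY aliases
 * N_NORMAL_MEMORY when !CONFIG_HIGHMEM.
 */
static int cpuset_mems_valid(const nodemask_t *mems_allowed)
{
        return nodes_subset(*mems_allowed, node_states[N_HIGH_MEMORY]);
}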
>
> > In the more general case, tho', I'm looking at all uses of the
> > node_online_map and for_each_online_node, for instances where they
> > should be replaced with one of the *_MEMORY masks. IMO, generic code
> > that is compiled independent of any CONFIG option, like NUMA, should
> > just work, independent of the config. Currently, as Serge has shown,
>
> AFAIK this works except for cpusets.
So far. I'm replacing other uses of node_online_map with the
N_HIGH_MEMORY mask, as we discussed. I should have that patch ready to
post tomorrow.
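The substitution is mostly mechanical--roughly like the sketch below,
where do_something() just stands in for whatever per-node work the
call site actually does:

int nid;

/* before: visits nodes that may have no memory at all */
for_each_online_node(nid)
        do_something(NODE_DATA(nid));

/* after: visit only nodes that have [high] memory */
for_each_node_state(nid, N_HIGH_MEMORY)
        do_something(NODE_DATA(nid));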
>
> > this is not the case. So, I think we should fix the *_MEMORY maps to be
> > correctly populated in both the NUMA and !NUMA cases. A couple of
> > options:
>
> There is no point in having a variable if you know the results because of
> !NUMA. That is the way nodemask.h has always operated.
But, the mask--the N_HIGH_MEMORY array element, that is--is there for
both NUMA and !NUMA [== N_NORMAL_MEMORY for !CONFIG_HIGHMEM]. We just
don't initialize it for the !NUMA case, currently.
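With MAX_NUMNODES == 1, nodemask.h stubs these out roughly as:

/* include/linux/nodemask.h, !NUMA side (roughly) */
static inline int node_state(int node, enum node_states state)
{
        return node == 0;       /* "the bit is set" for node 0 */
}

static inline void node_set_state(int node, enum node_states state)
{
        /* no-op:  node_states[] itself never gets populated */
}

which is why anything that ANDs with or intersects a
node_states[N_*_MEMORY] mask directly, rather than going through
node_state(), sees an empty mask.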
>
> > Thoughts?
>
> Lets get either rid of the definitions for the nodemasks in the !NUMA
> case or fix their contents to have the right constant value expected in
> cpusets.
That's what the patch I posted today [option 2] does--statically
initializes the N_*_MEMORY and N_CPU masks to indicate that node 0
exists. Serge and Dhaval have tested it on their platform and it solves
the cpuset mount problem.
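For anyone who hasn't seen it, the gist of that patch is a static
initializer along these lines [a sketch of the idea, not the literal
hunk]:

/* mm/page_alloc.c:  give the masks their expected constant value */
nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
        [N_POSSIBLE] = NODE_MASK_ALL,
        [N_ONLINE] = { { [0] = 1UL } },         /* node 0 is always online */
#ifndef CONFIG_NUMA
        [N_NORMAL_MEMORY] = { { [0] = 1UL } },  /* !NUMA: node 0 has memory */
#ifdef CONFIG_HIGHMEM
        [N_HIGH_MEMORY] = { { [0] = 1UL } },
#endif
        [N_CPU] = { { [0] = 1UL } },            /* ...and cpus */
#endif
};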
Lee