Message-ID: <1338371057.26856.226.camel@twins>
Date: Wed, 30 May 2012 11:44:17 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: David Rientjes <rientjes@...gle.com>
Cc: "Luck, Tony" <tony.luck@...el.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: Don't try allocating memory from offline nodes
On Tue, 2012-05-29 at 20:21 -0700, David Rientjes wrote:
> On Tue, 29 May 2012, Luck, Tony wrote:
>
> > Index: linux-2.6/kernel/sched/core.c
> > ===================================================================
> > --- linux-2.6.orig/kernel/sched/core.c
> > +++ linux-2.6/kernel/sched/core.c
> > @@ -6449,7 +6449,7 @@ static void sched_init_numa(void)
> > return;
> >
> > for (j = 0; j < nr_node_ids; j++) {
> > - struct cpumask *mask = kzalloc_node(cpumask_size(), GFP_KERNEL, j);
> > + struct cpumask *mask = kzalloc(cpumask_size(), GFP_KERNEL);
> > if (!mask)
> > return;
> >
>
> It's definitely better if we can allocate on the node, though, so perhaps
> do the same thing that I did in
> http://marc.info/?l=linux-kernel&m=133778739503111 by doing
> kzalloc_node(..., node_online(j) ? j : NUMA_NO_NODE)?
This data isn't used much, only when rebuilding the sched
domains, so it's not performance critical. I only used per-node
allocations because it seemed like the right thing to do; if it doesn't
work, I wouldn't bother making it more complex.
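For reference, David's suggestion would amount to something like the
following in that same loop (untested sketch, using the existing
node_online() and NUMA_NO_NODE helpers; shown only to illustrate the
fallback, not as a tested patch):

	/*
	 * Keep the per-node allocation where possible, but fall back to
	 * NUMA_NO_NODE (let the allocator pick a node) when node j is
	 * offline, so kzalloc_node() never sees an offline node.
	 */
	for (j = 0; j < nr_node_ids; j++) {
		struct cpumask *mask =
			kzalloc_node(cpumask_size(), GFP_KERNEL,
				     node_online(j) ? j : NUMA_NO_NODE);
		if (!mask)
			return;
		...
	}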