Message-ID: <alpine.DEB.2.00.1210091328030.32588@chino.kir.corp.google.com>
Date: Tue, 9 Oct 2012 13:36:08 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
cc: Tang Chen <tangchen@...fujitsu.com>, mingo@...hat.com,
miaox@...fujitsu.com, wency@...fujitsu.com,
linux-kernel@...r.kernel.org, linux-numa@...r.kernel.org
Subject: Re: [PATCH] Do not use cpu_to_node() to find an offlined cpu's node.

On Tue, 9 Oct 2012, Peter Zijlstra wrote:
> On Mon, 2012-10-08 at 10:59 +0800, Tang Chen wrote:
> > If a cpu is offline, its nid will be set to -1, and cpu_to_node(cpu) will
> > return -1. As a result, cpumask_of_node(nid) will return NULL. In this
> > case, find_next_bit() in for_each_cpu will get a NULL pointer and cause a
> > panic.
>
> Hurm, this is new, right? Who is changing all these semantics without
> auditing the tree and informing all affected people?
>

I've nacked the patch that did it because I think this should be done only
from the generic cpu hotplug code at the CPU_DEAD level, with a per-arch
callback to fix up whatever cpu-to-node mappings each architecture
maintains, since processes can reenter the scheduler at CPU_DYING.

The whole issue seems to be that alloc_{fair,rt}_sched_group() iterates
over all possible cpus (not just all online cpus) and calls
kzalloc_node(), which references a now-offlined node. Changing the nid to
-1 makes the slab code fall back to any online node.

Instead of hacking only the acpi code without standardizing this across
the kernel, what I think we need to do is:
- reset cpu-to-node with a per-arch callback in generic cpu hotplug code
at CPU_DEAD, and
- iterate over all possible cpus for node hot-remove, ensuring there are
  no stale references.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/