Message-ID: <1349815676.7880.85.camel@twins>
Date:	Tue, 09 Oct 2012 22:47:56 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Tang Chen <tangchen@...fujitsu.com>, mingo@...hat.com,
	miaox@...fujitsu.com, wency@...fujitsu.com,
	linux-kernel@...r.kernel.org, linux-numa@...r.kernel.org
Subject: Re: [PATCH] Do not use cpu_to_node() to find an offlined cpu's node.

On Tue, 2012-10-09 at 13:36 -0700, David Rientjes wrote:
> On Tue, 9 Oct 2012, Peter Zijlstra wrote:
> 
> > On Mon, 2012-10-08 at 10:59 +0800, Tang Chen wrote:
> > > If a cpu is offline, its nid will be set to -1, and cpu_to_node(cpu) will
> > > return -1. As a result, cpumask_of_node(nid) will return NULL. In this case,
> > > find_next_bit() in for_each_cpu will get a NULL pointer and cause panic.
> > 
> > Hurm, this is new, right? Who is changing all these semantics without
> > auditing the tree and informing all affected people?
> > 
> 
> I've nacked the patch that did it because I think it should be done from 
> the generic cpu hotplug code only at the CPU_DEAD level with a per-arch 
> callback to fixup whatever cpu-to-node mappings they maintain since 
> processes can reenter the scheduler at CPU_DYING.

Well, the code they were patching is in the wakeup path. As I think Tang
said, we leave !runnable tasks on whatever cpu they last ran on; even if
that cpu is offlined, we try to fix up the state when we get a wakeup.

On wakeup, it tries to find a cpu to run on and will try a cpu of the
same node first.

Now if that node's entirely gone away, it appears the cpu_to_node() map
will not return a valid node number.

I think that's a change in behaviour; it didn't use to do that, afaik.
Certainly this code hasn't changed in a while.


> The whole issue seems to be because alloc_{fair,rt}_sched_group() does an 
> iteration over all possible cpus (not all online cpus) and does 
> kzalloc_node() which references a now-offlined node.  Changing it to -1 
> makes the slab code fallback to any online node.

Right, that's because the rq structures are assumed to always be present.
What I cannot remember is why I'm not using per-cpu allocations there,
because that's exactly what it looks like it wants to be.

> What I think we need to do instead of hacking only the acpi code and not 
> standardizing this across the kernel is:

Right, what I don't understand is wtf ACPI has to do with anything. We
have plenty of cpu hotplug code, and ACPI wasn't involved in any of it
last time I checked.

>  - reset cpu-to-node with a per-arch callback in generic cpu hotplug code 
>    at CPU_DEAD, and
> 
>  - do an iteration over all possible cpus for node hot-remove ensuring 
>    there are no stale references.

Why do we need to clear cpu-to-node maps? Are we going to change the
topology at runtime? And what are you going to do with per-cpu stuff?
Per-cpu memory isn't freed on hotplug, so its node relation is static.

/me confused..

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
