Message-ID: <20150407102147.GJ23123@twins.programming.kicks-ass.net>
Date:	Tue, 7 Apr 2015 12:21:47 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Nishanth Aravamudan <nacc@...ux.vnet.ibm.com>
Cc:	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	Boqun Feng <boqun.feng@...ux.vnet.ibm.com>,
	Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
	linuxppc-dev@...ts.ozlabs.org,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Anton Blanchard <anton@...ba.org>
Subject: Re: Topology updates and NUMA-level sched domains

On Mon, Apr 06, 2015 at 02:45:58PM -0700, Nishanth Aravamudan wrote:
> Hi Peter,
> 
> As you are very aware, I think, power has some odd NUMA topologies (and
> changes to those topologies) at run-time. In particular, we can see
> a topology at boot:
> 
> Node 0: all CPUs
> Node 7: no CPUs
> 
> Then we get a notification from the hypervisor that a core (or two) has
> moved from node 0 to node 7. This results in the:

> or a re-init API (which won't try to reallocate various bits), because
> the topology could be completely different now (e.g.,
> sched_domains_numa_distance will also be inaccurate now).  Really, a
> topology update on power (not sure on s390x, but those are the only two
> archs that return a positive value from arch_update_cpu_topology() right
> now, afaics) is a lot like a hotplug event and we need to re-initialize
> any dependent structures.
> 
> I'm just sending out feelers; it seems we can limp by with the above
> warning, but that is less than ideal. Any help or insight you could
> provide would be greatly appreciated!
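
A rough sketch of how that node/CPU view looks from userspace, reading the
per-node cpulist files in sysfs (the 16-node scan limit here is only an
illustrative assumption, not anything taken from the report above):

/* Dump each present node's CPU list as exposed in
 * /sys/devices/system/node/nodeN/cpulist.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char path[64], line[256];

	for (int node = 0; node < 16; node++) {	/* arbitrary upper bound */
		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/cpulist", node);
		FILE *f = fopen(path, "r");
		if (!f)
			continue;		/* node not present */
		if (fgets(line, sizeof(line), f)) {
			line[strcspn(line, "\n")] = '\0';
			printf("node %d: cpus %s\n", node,
			       line[0] ? line : "(none)");
		}
		fclose(f);
	}
	return 0;
}

On a layout like the one described above this would show node 0 holding
every CPU and node 7 holding none at boot, and a different picture after
the hypervisor moves cores; anything that read the files earlier is never
told its snapshot went stale.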

So I think (and ISTR having stated this before) that dynamic cpu<->node
maps are absolutely insane.

There is a ton of stuff that assumes the cpu<->node relation is fixed at
boot time. Userspace is one of them; per-cpu memory is another.
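
As a hypothetical illustration of the userspace side (the program and its
helpers are invented for the example and assume libnuma): a process that
resolves its node once at start-up and keeps allocating against that cached
answer will quietly place memory on the wrong node once the cpu<->node
relation changes underneath it.

#define _GNU_SOURCE
#include <numa.h>		/* libnuma; link with -lnuma */
#include <sched.h>		/* sched_getcpu() */
#include <stdio.h>
#include <stdlib.h>

static int cached_node = -1;

static void init_placement(void)
{
	/* Snapshot of the cpu->node map, taken once and never refreshed. */
	cached_node = numa_node_of_cpu(sched_getcpu());
}

static void *alloc_local(size_t size)
{
	/* Stale if the topology changed after init_placement() ran. */
	return numa_alloc_onnode(size, cached_node);
}

int main(void)
{
	if (numa_available() < 0)
		return 1;

	init_placement();
	void *buf = alloc_local(1 << 20);
	if (buf) {
		printf("placed 1MiB on (possibly stale) node %d\n", cached_node);
		numa_free(buf, 1 << 20);
	}
	return 0;
}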

You simply cannot do this without causing massive borkage.

So please come up with a coherent plan to deal with the entire problem
of dynamic cpu to memory relation and I might consider the scheduler
impact. But we're not going to hack around and maybe make it not crash
in a few corner cases while the entire thing is shite.
