Date:	Thu, 18 Aug 2016 10:42:08 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Michael Holzheu <holzheu@...ux.vnet.ibm.com>
Cc:	Heiko Carstens <heiko.carstens@...ibm.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ming Lei <tom.leiming@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Martin Schwidefsky <schwidefsky@...ibm.com>
Subject: Re: [bisected] "sched: Allow per-cpu kernel threads to run on online
 && !active" causes warning

Hello, Michael.

On Thu, Aug 18, 2016 at 11:30:51AM +0200, Michael Holzheu wrote:
> Well, "no requirement" is not 100% correct. Currently we use the
> CPU topology information to assign newly configured CPUs to the "best
> fitting" node.
> 
> Example:
> 
> 1) We have two fake NUMA nodes N1 and N2 with the following CPU
>    assignment:
> 
>    - N1: cpu 1 on chip 1
>    - N2: cpu 2 on chip 2
> 
> 2) A new cpu 3 is configured that lives on chip 2
> 3) We assign cpu 3 to N2
> 
> We do this only if the nodes are balanced. If N2 already had one more
> cpu than N1, we would assign the new cpu to N1.

I see.  Out of curiosity, what's the purpose of fakenuma on s390?
There don't seem to be any actual memory locality concerns.  Is it
just to segment the memory of a machine into multiple pieces?  If so,
why is that necessary?  Do you hit scalability issues w/o NUMA nodes?
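
If I read that rule right, it boils down to something like the sketch
below (the names and tables are purely illustrative, not the actual
s390 code):

#define NR_FAKE_NODES	2

static int cpus_per_node[NR_FAKE_NODES];    /* CPUs currently on each fake node */
static const int chip_to_node[] = { 0, 1 }; /* illustrative chip -> node map */

/* Prefer the node of the new cpu's chip, unless that node is already ahead. */
static int pick_node_for_new_cpu(int chip)
{
	int preferred = chip_to_node[chip];
	int other = !preferred;

	if (cpus_per_node[preferred] > cpus_per_node[other])
		preferred = other;

	cpus_per_node[preferred]++;
	return preferred;
}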

As for the solution, if blind RR isn't good enough (although it sounds
like it could be, given that the balancing wasn't all that strong to
begin with), would it be an option to implement an interface which just
requests a new CPU rather than a specific one, and then picks one of
the vacant possible CPUs with node balancing in mind?
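
Something with roughly this shape, say (all the names here are made
up, just to show the idea, and I'm handwaving how node membership of
a vacant cpu would be known):

/*
 * Made-up sketch of such an interface: the caller asks for "a new
 * cpu" rather than a specific one, and the arch picks a vacant
 * possible cpu from whichever fake node currently has fewer cpus.
 */
static int smp_request_new_cpu(void)
{
	int node = pick_smaller_node();		/* made-up helper */
	int cpu;

	for_each_cpu(cpu, cpumask_of_node(node))
		if (!cpu_present(cpu))		/* vacant possible cpu */
			return cpu;

	return -ENODEV;
}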

Thanks.

-- 
tejun
