Message-ID: <1290177793.2109.1612.camel@laptop>
Date:	Fri, 19 Nov 2010 15:43:13 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Samuel Thibault <samuel.thibault@...-lyon.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Mike Galbraith <efault@....de>,
	Hans-Peter Jansen <hpj@...la.net>,
	linux-kernel@...r.kernel.org,
	Lennart Poettering <mzxreary@...inter.de>, david@...g.hm,
	Dhaval Giani <dhaval.giani@...il.com>,
	Vivek Goyal <vgoyal@...hat.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Markus Trippelsdorf <markus@...ppelsdorf.de>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Ingo Molnar <mingo@...e.hu>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: [RFC/RFT PATCH v3] sched: automated per tty task groups

On Fri, 2010-11-19 at 15:24 +0100, Samuel Thibault wrote:
> Peter Zijlstra, le Fri 19 Nov 2010 12:57:24 +0100, a écrit :
> > On Fri, 2010-11-19 at 01:07 +0100, Samuel Thibault wrote:
> > > Also note that having a hierarchical process structure should permit
> > > making things globally more efficient: avoid putting e.g. your cpp,
> > > cc1, and asm processes at three corners of your 4-socket NUMA machine :)
> > 
> > And no, using that to load-balance between CPUs doesn't necessarily help
> > with the NUMA case,
> 
> It doesn't _necessarily_ help, but it should help in quite a few cases.

Colour me unconvinced. Measuring shared cache footprint using PMUs might
help (and people have actually implemented and played with that at
various times in the past), but again, the added overhead of doing so
will hurt far more workloads than it would benefit.
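
A minimal sketch of what that measuring could look like, using the stock
perf_event_open(2) interface to count last-level cache read misses for
the calling task; the event choice and error handling are illustrative
only, not anything proposed here:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <linux/perf_event.h>

/* Thin wrapper; glibc does not provide one for this syscall. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	long long misses;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HW_CACHE;
	attr.config = PERF_COUNT_HW_CACHE_LL |
		      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
		      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	/* pid 0 = this task, cpu -1 = any cpu it runs on */
	fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... run the workload of interest here ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &misses, sizeof(misses)) == (ssize_t)sizeof(misses))
		printf("LL cache read misses: %lld\n", misses);
	close(fd);
	return 0;
}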

> > load-balancing is an impossible job (equivalent to
> > page-replacement -- you simply don't know the future), applications
> > simply do wildly weird stuff. 
> 
> Sure. Not a reason not to pick the low-hanging fruit :)

I'm not at all convinced that using the process hierarchy will really
help much, but feel free to write the patch and test it. Making the
migration condition very complex, though, will definitely hurt some workloads.
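
To make concrete what such a condition could even look like, here is a
toy model only; the struct and helper below are hypothetical and have
nothing to do with the actual load balancer, they just show the kind of
per-migration check (and its cost) being talked about:

#include <stdio.h>

struct task {
	int pid;
	int parent_pid;
	int numa_node;	/* node the task last ran on */
};

/* Hypothetical heuristic: only call a destination node "preferred" if
 * the task, or its parent, already has history there. Evaluating even
 * this on every balance pass is the extra overhead in question. */
static int hierarchy_prefers_node(const struct task *t,
				  const struct task *parent, int dst_node)
{
	if (parent && parent->numa_node == dst_node)
		return 1;
	return t->numa_node == dst_node;
}

int main(void)
{
	struct task make = { .pid = 100, .parent_pid = 1,   .numa_node = 1 };
	struct task cc1  = { .pid = 101, .parent_pid = 100, .numa_node = 0 };

	printf("migrate cc1 to node 1? preferred=%d\n",
	       hierarchy_prefers_node(&cc1, &make, 1));
	return 0;
}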

> > From the process hierarchy there's absolutely no difference between a
> > cc1/cpp/asm pipeline and some MPI jobs: both can be parent-child
> > relations with pipes between them; some just run short and have data
> > affinity, others run long and don't have any.
> 
> MPI jobs typically communicate with each other. Keeping them on the same
> socket lets shared-memory MPI drivers mostly stay in e.g. the L3 cache.
> That typically gives benefits.

Pushing them away permits them to use a larger part of that same L3
cache, allowing them to work on larger data sets. Most MPI apps have a
large compute-to-communication ratio, because that is what allows them
to run in parallel so well (traditionally the interconnects were
terribly slow to boot). That suggests working on larger data sets is a
good thing and that running on the same node really doesn't matter,
since communication is assumed slow anyway.
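
For comparison, the "keep them on one socket" side of that trade-off
needs nothing fancy today; a minimal sketch using plain
sched_setaffinity(2), where the CPU numbers for one socket are assumed
(0-3 here) rather than derived from the real topology:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;
	int cpu;

	/* CPUs 0-3 are assumed to be one socket sharing an L3. */
	CPU_ZERO(&set);
	for (cpu = 0; cpu < 4; cpu++)
		CPU_SET(cpu, &set);

	/* pid 0 = the calling task */
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		return 1;
	}
	printf("pinned to CPUs 0-3\n");
	return 0;
}

Whether that pinning wins (shared L3 for communication) or loses (less
aggregate cache for the working sets) is exactly the point in dispute.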

There really is no simple solution to this.
