Message-ID: <1332933968.2528.26.camel@twins>
Date:	Wed, 28 Mar 2012 13:26:08 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Andrea Arcangeli <aarcange@...hat.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Hillf Danton <dhillf@...il.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Dan Smith <danms@...ibm.com>, Paul Turner <pjt@...gle.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Lee Schermerhorn <Lee.Schermerhorn@...com>, linux-mm@...ck.org,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Mike Galbraith <efault@....de>,
	Bharata B Rao <bharata.rao@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Johannes Weiner <hannes@...xchg.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 11/39] autonuma: CPU follow memory algorithm

On Tue, 2012-03-27 at 18:15 +0200, Andrea Arcangeli wrote:
> This is _purely_ a performance optimization so if my design is so bad,
> and you're also requiring all apps that spans over more than one NUMA
> node to be modified to use your new syscalls, you won't have problems
> to win against AutoNUMA in the benchmarks. 

Right, so can we agree that the only case where they diverge is single
processes that have multiple threads and are bigger than a single node (either
in memory, cputime or both)?

I've asked you several times why you care so much about that one case, but
have not gotten an answer.

I'll grant you that such processes, when left unmodified, might do better with
your stuff, however:

 - your stuff assumes there is a fair amount of locality to exploit.

   I'm not seeing how this is true in general, since data partitioning is hard,
   and for those problems where it's possible people tend to do it already,
   yielding natural points to add the syscalls.

 - your stuff doesn't actually nest; since a guest kernel has no clue as to
   what constitutes a node (or whether there even is such a thing), it will
   randomly move tasks around on the vcpus, with complete disrespect for
   whatever host vcpu<->page mappings you set up.

   Guest kernels actively scramble whatever relations you're building by
   scanning, destroying whatever (temporal) locality you think you might
   have found.

 - also, by not exposing NUMA to the guest kernel, the guest kernel/userspace
   has no clue it needs to behave as if there are multiple nodes, etc.

Furthermore, most applications that are really big tend to have already thought
about parallelism and have employed things like data-parallelism if at all
possible. If this is not possible (many problems fall in this category) there
really isn't much you can do.

Related to this is that all applications that currently use mbind() and
sched_setaffinity() are trivial to convert.
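
To make that concrete, here's roughly what such explicit placement looks like
today. This is only a sketch; node 0 and CPUs 0-3 are made-up numbers, and a
real application would query the topology (e.g. through libnuma) instead of
hard-coding it:

#define _GNU_SOURCE
#include <sched.h>      /* sched_setaffinity(), CPU_* macros */
#include <numaif.h>     /* mbind(), MPOL_BIND -- link with -lnuma */
#include <sys/mman.h>   /* mmap() */
#include <stdio.h>

int main(void)
{
	/* Pretend node 0 owns CPUs 0-3; pin this thread to them. */
	cpu_set_t cpus;
	CPU_ZERO(&cpus);
	for (int c = 0; c < 4; c++)
		CPU_SET(c, &cpus);
	if (sched_setaffinity(0, sizeof(cpus), &cpus) != 0)
		perror("sched_setaffinity");

	/* Bind this thread's working set to node 0's memory. */
	size_t len = 64UL << 20;		/* 64M, arbitrary */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	unsigned long nodemask = 1UL << 0;	/* node 0 only */
	if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0) != 0)
		perror("mbind");

	/* ... run the node-local computation on buf ... */
	return 0;
}

Converting such an app mostly means issuing (or replacing) calls like these at
the points where it already partitions its data.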

Also, really big threaded programs have a natural enemy: the shared state
that makes them a single process, most prominently the shared address space
(mmap_sem etc.).

There's also the reason Avi mentioned: core counts tend to go up, which means
nodes are getting bigger and bigger.

But most importantly, your solution is big, complex and costly, specifically to
handle this one case which, for the above reasons, I think is not very
interesting.

So why not do the simple thing first before going overboard for a case that
might be irrelevant?

