Date:	Tue, 20 Mar 2012 08:31:47 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Christoph Lameter <cl@...ux.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Avi Kivity <avi@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, Paul Turner <pjt@...gle.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Mike Galbraith <efault@....de>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Dan Smith <danms@...ibm.com>,
	Bharata B Rao <bharata.rao@...il.com>,
	Lee Schermerhorn <Lee.Schermerhorn@...com>,
	Rik van Riel <riel@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC][PATCH 00/26] sched/numa


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> On Mon, Mar 19, 2012 at 1:28 PM, Ingo Molnar <mingo@...nel.org> wrote:
> >
> > That said, PeterZ's numbers showed some pretty good
> > improvement for the streams workload:
> >
> >  before: 512.8M
> >  after: 615.7M
> >
> > i.e. a +20% improvement on a not very heavily NUMA box.
> 
> Well, streams really isn't a very interesting benchmark. It's 
> the traditional single-threaded cpu-only thing that just 
> accesses things linearly, and I'm not convinced the numbers 
> should be taken to mean anything at all.

Yeah, I considered it the 'ideal improvement' for memory-bound, 
private-working-set workloads on commodity hardware - i.e. the 
upper envelope of anything that might matter. We don't know the 
worst-case regression percentage, nor the median improvement - 
which might very well be a negative number.

More fundamentally, we don't even know whether such access 
patterns matter at all.

> The HPC people want to multi-thread things these days, and 
> "cpu/memory affinity" is a lot less clear then.
> 
> So I can easily imagine that the performance improvement is 
> real, but I really don't think "streams improves by X %" is 
> all that interesting. Are there any more relevant loads that 
> actually matter to people that we could show improvement on?

That would be interesting to see.

I could queue this up in a topical branch in a pure opt-in 
fashion, to make it easier to test.

Assuming there will be real improvements on real workloads, do 
you have any fundamental objections against the 'home node' 
concept itself and its placement into mm_struct? I think it 
makes sense and mm_struct is the most logical place to host it.

The rest looks rather non-controversial to me: apps that want 
more memory affinity should get it, and both the VM and the 
scheduler should help achieve that goal, within memory and CPU 
allocation constraints.

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
