Message-ID: <87fwd2d2kp.fsf@danplanet.com>
Date:	Tue, 20 Mar 2012 21:01:58 -0700
From:	Dan Smith <danms@...ibm.com>
To:	Andrea Arcangeli <aarcange@...hat.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, Paul Turner <pjt@...gle.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Mike Galbraith <efault@....de>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Bharata B Rao <bharata.rao@...il.com>,
	Lee Schermerhorn <Lee.Schermerhorn@...com>,
	Rik van Riel <riel@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC] AutoNUMA alpha6

AA>         upstream autonuma numasched hard inverse
AA> numa02  64       45       66        42   81
AA> numa01  491      328      607       321  623 -D THREAD_ALLOC
AA> numa01  305      207      338       196  378 -D NO_BIND_FORCE_SAME_NODE

AA> So give me a break... you must have made a real mess in your
AA> benchmarking.

I'm just running what you posted, dude :)

AA> numasched is always doing worse than upstream here, in fact two
AA> times massively worse. Almost as bad as the inverse binds.

Well, something clearly isn't right, because my numbers don't match
yours at all. Here are the numbers again, this time with THP disabled,
compared against the rest of the numbers from my previous runs:

            autonuma   HARD   INVERSE   NO_BIND_FORCE_SAME_NODE

numa01      366        335    356       377
numa01THP   388        336    353       399

That shows that autonuma is worse than inverse binds here. If I'm
running your stuff incorrectly, please tell me and I'll correct
it. However, I've now compiled the binary exactly as you asked, with THP
disabled, and am seeing surprisingly consistent results.
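
(For the record, "THP disabled" here means flipping the usual sysfs knob
before each run -- a minimal sketch assuming the standard
transparent_hugepage interface, not anything specific to this box:)

/* Sketch only: turn THP off via the standard sysfs knob before a run. */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/enabled", "w");

        if (!f) {
                perror("transparent_hugepage/enabled");
                return 1;
        }
        fputs("never", f);      /* valid values: always, madvise, never */
        fclose(f);
        return 0;
}

In practice that's just an echo of "never" into that file as root; the
point is only that THP was off for every number in the table above.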

AA> Maybe you've got more than 16G? I've got 16G, and that leaves 1G free
AA> on both nodes at peak load with AutoNUMA. That should be enough for
AA> numasched too (Peter complained that I waste 80MB on a 16G system, so
AA> he can't possibly be intentionally wasting 2GB).

Yep, 24G here. Do I need to tweak the test?

AA> In any case your results were already _obviously_ broken without me
AA> having to benchmark numasched to verify: it's impossible for numasched
AA> to be 20% faster than autonuma on numa01, because that would mean
AA> numasched is something like 18% faster than hard bindings, which is
AA> mathematically impossible unless your hardware is not NUMA, or is
AA> superNUMAbroken.

How do you figure? I didn't post any hard binding numbers. In fact,
numasched performed about equal to hard binding...definitely within your
stated 2% error interval. That was with THP enabled; tomorrow I'll be
glad to run them all again without THP.
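
(To be clear on terms: by "hard binding" I just mean pinning each thread
and its memory to a single node, roughly along these lines -- an
illustrative libnuma sketch, not the actual numa01/numa02 harness:)

/* Illustrative hard binding: run on node 0, allocate only from node 0. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        size_t sz = 64UL << 20;         /* 64MB, arbitrary for the sketch */
        char *buf;

        if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support\n");
                return 1;
        }
        numa_run_on_node(0);            /* pin this thread's CPUs to node 0 */
        buf = numa_alloc_onnode(sz, 0); /* allocate memory on node 0 only */
        if (!buf)
                return 1;
        memset(buf, 0, sz);             /* touch it so the pages really land there */
        numa_free(buf, sz);
        return 0;
}

Build with -lnuma. That kind of binding is what the "hard" column is
meant to approximate, which is why I'd expect a placement scheme to at
best come in about equal to it, as numasched did in my THP-enabled runs.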

-- 
Dan Smith
IBM Linux Technology Center
