Message-ID: <20121116181433.GA4763@gmail.com>
Date:	Fri, 16 Nov 2012 19:14:33 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Paul Turner <pjt@...gle.com>,
	Lee Schermerhorn <Lee.Schermerhorn@...com>,
	Christoph Lameter <cl@...ux.com>, Mel Gorman <mgorman@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 5/8] sched, numa, mm: Add adaptive NUMA affinity support


* Rik van Riel <riel@...hat.com> wrote:

> On 11/12/2012 11:04 AM, Peter Zijlstra wrote:
> 
> >We change the load-balancer to prefer moving tasks in order of:
> >
> >   1) !numa tasks and numa tasks in the direction of more faults
> >   2) allow !ideal tasks getting worse in the direction of faults
> >   3) allow private tasks to get worse
> >   4) allow shared tasks to get worse
> >
> >This order ensures we prefer increasing memory locality but when
> >we do have to make hard decisions we prefer spreading private
> >over shared, because spreading shared tasks significantly
> >increases the interconnect bandwidth since not all memory can
> >follow.
> 
> Combined with the fact that we only turn a certain amount of 
> memory into NUMA ptes each second, could this result in a 
> program being classified as a private task one second, and a 
> shared task a few seconds later?
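
To make the quoted ordering concrete, here is a toy sketch of how
the load balancer could rank migration candidates - made-up types
and helper names, not the actual patch code; the lowest class is
the preferred candidate to move:

#include <stdbool.h>

/* Toy stand-ins for the scheduler's real per-task NUMA state. */
struct task_info {
        bool uses_numa;           /* task has NUMA fault statistics          */
        bool dst_has_more_faults; /* destination node has more of its faults */
        bool on_ideal_node;       /* currently on its best-faulting node     */
        bool faults_private;      /* most of its faults are on private pages */
};

enum move_class {
        MOVE_IMPROVES = 0,  /* 1) !numa task, or numa task moving towards faults */
        MOVE_NONIDEAL,      /* 2) !ideal task allowed to get worse               */
        MOVE_PRIVATE,       /* 3) private task allowed to get worse              */
        MOVE_SHARED,        /* 4) shared task allowed to get worse               */
};

static enum move_class move_class(const struct task_info *t)
{
        if (!t->uses_numa || t->dst_has_more_faults)
                return MOVE_IMPROVES;
        if (!t->on_ideal_node)
                return MOVE_NONIDEAL;
        if (t->faults_private)
                return MOVE_PRIVATE;
        return MOVE_SHARED;
}

In the patch itself this ordering lives inside the load balancer's
migration decisions rather than a standalone helper like the above;
the helper is only meant to show the priority.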

It's a statistical method, like most of scheduling.

It's as prone to oscillation as tasks are already prone to being 
moved spuriously by the load balancer today, due to the per-CPU 
load average being statistical and tasks sitting slightly above 
or below a critical load-average value.
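
As an illustration of what 'statistical' means here - a toy sketch
with made-up names and thresholds, not the patch's code - the
private/shared decision comes from a sampled window of NUMA faults,
so a task near the boundary can land on either side from one window
to the next:

#include <stdbool.h>

/* One scan period's worth of sampled NUMA faults for a task. */
struct fault_sample {
        unsigned long private_faults; /* faults on pages only this task uses  */
        unsigned long shared_faults;  /* faults on pages other tasks also use */
};

static bool sample_looks_private(const struct fault_sample *s)
{
        unsigned long total = s->private_faults + s->shared_faults;

        if (!total)
                return true;    /* no data yet: assume private */

        /* Made-up threshold: majority of faults private in this window. */
        return 2 * s->private_faults > total;
}

A window that is 55% private one second and 45% private the next
crosses that boundary - the flip you are asking about.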
 
Higher-frequency oscillation should not normally happen though: 
we dampen these metrics and have per-CPU hysteresis.

( We can also add explicit hysteresis if anyone demonstrates 
  real oscillation with a real workload - wanted to keep it 
  simple first and change it only as-needed. )
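
Roughly, that dampening plus an explicit hysteresis would look
something like this - again made-up names and constants, just to
illustrate the mechanism:

#include <stdbool.h>

#define PRIVATE_ENTER_PCT  60   /* flip to private above 60% ...         */
#define PRIVATE_EXIT_PCT   40   /* ... and back to shared only below 40% */

/* Damped running fault totals plus the current classification. */
struct numa_stats {
        unsigned long private_faults;
        unsigned long shared_faults;
        bool is_private;
};

static void numa_stats_update(struct numa_stats *ns,
                              unsigned long new_private,
                              unsigned long new_shared)
{
        unsigned long total, pct;

        /* Exponential decay: halve the old totals, then add the new sample. */
        ns->private_faults = ns->private_faults / 2 + new_private;
        ns->shared_faults  = ns->shared_faults  / 2 + new_shared;

        total = ns->private_faults + ns->shared_faults;
        if (!total)
                return;

        pct = ns->private_faults * 100 / total;

        /* Hysteresis: a different threshold in each direction, so a
         * borderline task does not flip back and forth every period. */
        if (!ns->is_private && pct > PRIVATE_ENTER_PCT)
                ns->is_private = true;
        else if (ns->is_private && pct < PRIVATE_EXIT_PCT)
                ns->is_private = false;
}

The wider the gap between the two thresholds, the more sustained a
change has to be before the classification follows it.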

Thanks,

	Ingo
