Date:	Fri, 28 Jun 2013 17:12:56 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Mel Gorman <mgorman@...e.de>
Cc:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...nel.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 7/8] sched: Split accounting of NUMA hinting faults that
 pass two-stage filter

On Fri, Jun 28, 2013 at 03:29:25PM +0100, Mel Gorman wrote:
> > Oh duh indeed. I totally missed that it did that. The changelog also
> > doesn't give the rationale for this. Mel?
> > 
> 
> There were a few reasons.
> 
> First, if there are many tasks sharing the page then they'll all move
> towards the same node. That node becomes compute overloaded, the tasks
> get scheduled away later only to bounce back again. Alternatively, the
> shared tasks would just bounce around nodes because the fault
> information is effectively noise. Either way, I felt that lumping
> shared faults in with private faults would be slower overall.
> 
> The second reason was based on a hypothetical workload that had a small
> number of very important, heavily accessed private pages but a large
> shared array. The shared array would dominate the number of faults and
> cause its node to be selected as the preferred node even though that is
> the wrong decision.
> 
> The third reason was that multiple threads in a process will race each
> other to fault the shared page, making the information unreliable.
> 
> It is important that *something* be done with shared faults, but I
> haven't worked out exactly what yet. One possibility would be to give
> them a different weight, maybe based on the number of active NUMA
> nodes, but I have not tested anything yet. Peter suggested privately
> that if shared faults dominate the workload then the shared pages
> could be migrated based on an interleave policy, which has some
> potential.
> 

It would be good to put something like this in the Changelog, or even as
a comment near how we select the preferred node.
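
Something like the below, maybe. This is only a rough sketch, written as
a standalone userspace program; the names (task_numa_fault(),
task_faults_idx(), numa_faults[], MAX_NUMNODES) and the 1/2 weight on
shared faults are made up here and merely stand in for whatever the
series actually ends up doing:

#include <stdio.h>

#define MAX_NUMNODES	4

/* Two counters per node: index [2*nid] = shared, [2*nid + 1] = private. */
static unsigned long numa_faults[2 * MAX_NUMNODES];

static int task_faults_idx(int nid, int priv)
{
	return 2 * nid + priv;
}

/*
 * Called from the hinting fault path; "priv" is the verdict of the
 * two-stage filter: did the last fault on this page come from us?
 */
static void task_numa_fault(int nid, int pages, int priv)
{
	numa_faults[task_faults_idx(nid, priv)] += pages;
}

/*
 * Select the preferred node. Private faults count in full; shared
 * faults get a reduced weight so that a big shared array cannot drown
 * out a small set of hot private pages. The 1/2 is arbitrary; scaling
 * by the number of active NUMA nodes, or falling back to interleave
 * when shared faults dominate, are the alternatives mentioned above.
 */
static int task_numa_preferred_node(void)
{
	unsigned long faults, max_faults = 0;
	int nid, max_nid = -1;

	for (nid = 0; nid < MAX_NUMNODES; nid++) {
		faults  = numa_faults[task_faults_idx(nid, 1)];
		faults += numa_faults[task_faults_idx(nid, 0)] / 2;
		if (faults > max_faults) {
			max_faults = faults;
			max_nid = nid;
		}
	}
	return max_nid;
}

int main(void)
{
	/* A few hot private pages on node 0, a big shared array on node 1. */
	task_numa_fault(0, 16, 1);
	task_numa_fault(1, 24, 0);

	/* 16 private beats 24/2 shared, so node 0 wins. */
	printf("preferred node: %d\n", task_numa_preferred_node());
	return 0;
}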
