Message-ID: <20220518094112.GE10117@worktop.programming.kicks-ass.net>
Date:   Wed, 18 May 2022 11:41:12 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Valentin Schneider <valentin.schneider@....com>,
        Aubrey Li <aubrey.li@...ux.intel.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better
 approximation of memory channels

On Wed, May 11, 2022 at 03:30:38PM +0100, Mel Gorman wrote:
> For a single LLC per node, a NUMA imbalance is allowed up to the point
> where 25% of the CPUs sharing a node could be active. One intent of the
> cut-off is to avoid an imbalance across memory channels, but no
> topological information about active memory channels is available.
> Furthermore, there can be differences between nodes depending on the
> number of populated DIMMs.
> 
> A cut-off of 25% was arbitrary but generally worked. It does have a
> severe corner case though: a parallel workload using 25% of all
> available CPUs can over-saturate the memory channels. This can happen
> due to the initial forking of tasks that get pulled more to one node
> after early wakeups (e.g. a barrier synchronisation), an imbalance that
> is not quickly corrected by the load balancer. The LB may fail to act
> quickly as the parallel tasks are considered poor migration candidates
> due to locality or cache hotness.
> 
> On a range of modern Intel CPUs, 12.5% appears to be a better cut-off
> assuming all memory channels are populated, and it is used as the new
> cut-off point. A minimum of 1 is specified to allow a communicating
> pair to remain local even for CPUs with low numbers of cores. Modern
> AMD CPUs have multiple LLCs per node and are not affected.

Can the hardware tell us about memory channels?
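
As a rough illustration of the cut-off described in the quoted changelog
(this is a sketch, not the patch itself; the function name, the handling
of multi-LLC nodes and the example CPU counts are assumptions, and only
the 12.5% figure and the minimum of 1 come from the changelog):

#include <stdio.h>

/*
 * Sketch: allowed NUMA imbalance for a node. For a node with a single
 * LLC, permit an imbalance of up to 12.5% of the CPUs in the node
 * (previously 25%), with a floor of 1 so a communicating pair of tasks
 * can stay local even on small nodes. Nodes with multiple LLCs are not
 * subject to the percentage cut-off in this sketch.
 */
static unsigned int allowed_numa_imbalance(unsigned int cpus_per_node,
                                           unsigned int llcs_per_node)
{
        unsigned int imb;

        if (llcs_per_node == 1)
                imb = cpus_per_node / 8;        /* 12.5% cut-off */
        else
                imb = llcs_per_node;            /* assumption for multi-LLC nodes */

        return imb > 1 ? imb : 1;               /* never below 1 */
}

int main(void)
{
        /* an 80-CPU node with one LLC allows an imbalance of 10 tasks */
        printf("%u\n", allowed_numa_imbalance(80, 1));
        /* a 4-CPU node still allows an imbalance of 1 */
        printf("%u\n", allowed_numa_imbalance(4, 1));
        return 0;
}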
