Message-ID: <20190618123318.GG3419@hirez.programming.kicks-ass.net>
Date:   Tue, 18 Jun 2019 14:33:18 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Matt Fleming <matt@...eblueprint.co.uk>
Cc:     "Lendacky, Thomas" <Thomas.Lendacky@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Suthikulpanit, Suravee" <Suravee.Suthikulpanit@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Borislav Petkov <bp@...en8.de>
Subject: Re: [PATCH] sched/topology: Improve load balancing on AMD EPYC

On Tue, Jun 18, 2019 at 11:43:19AM +0100, Matt Fleming wrote:
> This works for me under all my tests. Thoughts?
> 
> --->8---
> 
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 80a405c2048a..4db4e9e7654b 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -8,6 +8,7 @@
>  #include <linux/sched.h>
>  #include <linux/sched/clock.h>
>  #include <linux/random.h>
> +#include <linux/topology.h>
>  #include <asm/processor.h>
>  #include <asm/apic.h>
>  #include <asm/cacheinfo.h>
> @@ -824,6 +825,8 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
>  {
>  	set_cpu_cap(c, X86_FEATURE_ZEN);
>  

I'm thinking this deserves a comment. Traditionally the SLIT table held
relative memory latency: with the identity (local) distance being 10, 16
would indicate 1.6 times local latency and 32 would be 3.2 times local.
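
Roughly, the convention boils down to this. A throw-away sketch, not
code anywhere in the tree (slit_latency_pct() is made up; LOCAL_DISTANCE
is the real identity value, 10, from include/linux/topology.h):

	#include <linux/topology.h>

	/* Classic SLIT semantics: distance d means roughly d/10 times
	 * local memory latency, the identity being LOCAL_DISTANCE (10).
	 * So 16 -> 160% and 32 -> 320% of local latency. */
	static unsigned int slit_latency_pct(unsigned int distance)
	{
		return distance * 100 / LOCAL_DISTANCE;
	}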

Now, even very early on, BIOS monkeys went about their business and put
random values in there in an attempt to 'tune' the system based on how
$random-os behaved, which is all sorts of fu^Wwrong.

Now, I suppose my question is: is the 32 that Zen puts in an actual
relative memory latency metric, or a random value we somehow have to
deal with? And can we pretty please describe the whole sordid story
behind this 'tunable' somewhere?

> +	node_reclaim_distance = 32;
> +
>  	/* Fix erratum 1076: CPB feature bit not being set in CPUID. */
>  	if (!cpu_has(c, X86_FEATURE_CPB))
>  		set_cpu_cap(c, X86_FEATURE_CPB);
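
For reference, this is roughly what the knob ends up gating. A
simplified sketch modelled on mm/page_alloc.c's zone_allows_reclaim(),
not the literal upstream code (node_allows_reclaim() is a made-up name):

	#include <linux/topology.h>

	/* A node is considered near enough to reclaim from when its SLIT
	 * distance is within node_reclaim_distance.  Zen reports 32
	 * between remote nodes, which the old hard-coded cut-off of 30
	 * (RECLAIM_DISTANCE) excluded; raising the tunable to 32 brings
	 * them back in. */
	static bool node_allows_reclaim(int local_nid, int nid)
	{
		return node_distance(local_nid, nid) <= node_reclaim_distance;
	}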
