Message-ID: <aa369ee2-6557-8e3f-e2b4-d347b27825a6@linux.intel.com>
Date:   Thu, 29 Mar 2018 06:45:12 -0700
From:   Dave Hansen <dave.hansen@...ux.intel.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        Alison Schofield <alison.schofield@...el.com>
Cc:     Ingo Molnar <mingo@...nel.org>, Tony Luck <tony.luck@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        "H. Peter Anvin" <hpa@...ux.intel.com>,
        Borislav Petkov <bp@...en8.de>,
        Peter Zijlstra <peterz@...radead.org>,
        David Rientjes <rientjes@...gle.com>,
        Igor Mammedov <imammedo@...hat.com>,
        Prarit Bhargava <prarit@...hat.com>, brice.goglin@...il.com,
        x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] x86,sched: allow topologies where NUMA nodes share an
 LLC

On 03/29/2018 06:16 AM, Thomas Gleixner wrote:
>> This is OK at least on the hardware we are immediately concerned about
>> because the LLC sharing happens at both the slice and at the package
>> level, which are also NUMA boundaries.
> So that addresses the scheduler interaction, but it still leaves the
> information in the sysfs files unchanged. See cpu/intel_cacheinfo.c.  There
> are applications which use that information so it should be correct.

Were you thinking of shared_cpu_list/map?  The information in there is
correct for core->off-package access.  It is not correct for
core->on-package access, unless that access is perfectly interleaved
across both package "slices".
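
Just to make the consumer side concrete: the sysfs files in question are the
per-cache shared_cpu_list/map attributes, and a topology-aware application
reads them along these lines (a minimal sketch, not from the patch; index3
being the L3/LLC is an assumption, real code would check the "level" file):

    #include <stdio.h>

    /*
     * Sketch of an application reading which CPUs are reported as sharing
     * CPU 0's last-level cache.  index3 is assumed to be the L3 here.
     */
    int main(void)
    {
            char buf[256];
            FILE *f = fopen("/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list", "r");

            if (!f)
                    return 1;
            if (fgets(buf, sizeof(buf), f))
                    /* e.g. "0-27,56-83": all CPUs listed as sharing this cache */
                    printf("cpu0 LLC shared with: %s", buf);
            fclose(f);
            return 0;
    }

An application trusting that list would conclude that on-package accesses to
the LLC are uniform, which is exactly the part that is only true under the
perfect-interleaving assumption above.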

We could try to add an attribute or two to clarify this situation.  But,
similar to the CPUID leaves, I don't think we have a precise way to
describe how the cache actually works here.

