Message-ID: <alpine.DEB.2.20.1706271604550.1798@nanos>
Date:   Tue, 27 Jun 2017 16:21:02 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>
cc:     Borislav Petkov <bp@...en8.de>, x86@...nel.org,
        linux-kernel@...r.kernel.org, leo.duran@....com,
        yazen.ghannam@....com, Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 1/2] x86/CPU/AMD: Present package as die instead of
 socket

On Tue, 27 Jun 2017, Suravee Suthikulpanit wrote:
> On 6/27/17 17:48, Borislav Petkov wrote:
> > On Tue, Jun 27, 2017 at 01:40:52AM -0500, Suravee Suthikulpanit wrote:
> > > However, this is not the case on AMD family17h multi-die processor
> > > platforms, which can have up to 4 dies per socket as shown in the
> > > following system topology.
> > 
> > So what exactly does that mean? A die is a package on ZN and you can have up
> > to 4 packages on a physical socket?
> 
> Yes. 4 packages (or 4 dies, or 4 NUMA nodes) in a socket.

And why is this relevant at all?

The kernel does not care about sockets. Sockets are electromechanical
components and completely irrelevant.

The kernel cares about:

    Threads	 - Single scheduling unit

    Cores	 - Contains one or more threads

    Packages	 - Contains one or more cores. The cores share L3.
    
    NUMA Node	 - Contains one or more Packages which share a memory
    	 	   controller.

		   I'm not aware of x86 systems which have several Packages
		   sharing a memory controller, so Package == NUMA Node
		   (but I might be wrong here).

    Platform	 - Contains one or more NUMA Nodes

All the kernel is interested in is the above and the NUMA Node distance so
it knows about memory access latencies. No sockets, no MCMs, that's all
completely useless for the scheduler.
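
For illustration, here is a minimal userspace sketch (nothing beyond the
standard sysfs topology layout is assumed) which dumps exactly the data the
scheduler gets to work with - the thread/core/package mapping and the NUMA
distance table:

/* Sketch: dump the topology information the scheduler consumes, read
 * straight from sysfs. */
#include <stdio.h>

/* Read a single line from a sysfs attribute; NULL if the file is absent. */
static char *read_attr(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return NULL;
	if (!fgets(buf, len, f))
		buf[0] = '\0';
	fclose(f);
	return buf;
}

int main(void)
{
	char path[256], buf[256];
	int cpu, node;

	/* Thread -> Core -> Package mapping as the kernel reports it */
	for (cpu = 0; ; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
			 cpu);
		if (!read_attr(path, buf, sizeof(buf)))
			break;
		printf("cpu%d: package %s", cpu, buf);

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
		if (read_attr(path, buf, sizeof(buf)))
			printf("cpu%d: core    %s", cpu, buf);
	}

	/* NUMA node distances - that's what encodes the access latencies */
	for (node = 0; ; node++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/distance", node);
		if (!read_attr(path, buf, sizeof(buf)))
			break;
		printf("node%d distance: %s", node, buf);
	}
	return 0;
}

If that output already shows one distinct package ID per die plus a sane
distance table, the scheduler has everything it needs.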

So if the current CPUID stuff gives you the same physical package ID for all
packages in an MCM, then this needs to be fixed at the CPUID/ACPI/BIOS level
and not hacked around in the kernel.
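
If in doubt what the hardware actually reports per die, a quick sketch like
the following (assuming the Fn8000_001E register layout documented for
family 17h) prints the raw core/node IDs for the CPU it happens to run on:

/* Sketch: query the AMD topology extension leaf CPUID Fn8000_001E.
 * Field layout per the family 17h documentation; verify against the PPR. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x8000001e, &eax, &ebx, &ecx, &edx)) {
		fprintf(stderr, "CPUID leaf 0x8000001e not supported\n");
		return 1;
	}

	printf("extended APIC id : %u\n", eax);
	printf("core id          : %u\n", ebx & 0xff);
	printf("node id          : %u\n", ecx & 0xff);	/* the die within the MCM */
	printf("nodes per socket : %u\n", ((ecx >> 8) & 0x7) + 1);
	return 0;
}

Pin the task to each CPU in turn (taskset) to see the IDs across the whole
system.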

The only reason why an MCM might need its own ID is when it contains
infrastructure which is shared between the packages, but again that's
irrelevant for the scheduler. That'd only be relevant for implementing a
driver for that shared infrastructure.

Thanks,

	tglx
