Date:   Tue, 15 Sep 2020 10:35:15 +0200
From:   Borislav Petkov <bp@...en8.de>
To:     Yazen Ghannam <yazen.ghannam@....com>
Cc:     linux-edac@...r.kernel.org, linux-kernel@...r.kernel.org,
        tony.luck@...el.com, x86@...nel.org,
        Smita.KoralahalliChannabasappa@....com
Subject: Re: [PATCH v2 1/8] x86/CPU/AMD: Save NodeId on AMD-based systems

On Mon, Sep 14, 2020 at 02:20:39PM -0500, Yazen Ghannam wrote:
> Yes, that's right.
> 
> I called it "node_id" based on the AMD documentation and what it's
> called today in the Linux code. It's called other things like nb_id and
> nid too.
> 
> I think we can call it something else to avoid confusion with NUMA nodes
> if that'll help.

Yes, whatever we end up calling it, it should be added to that
documentation file I pointed you at. Because months and years from now,
it'll be the first place we look before changing the topology again.

> Yes, you're right. The AMD documentation is different, so I'll try to
> stick with the Linux documentation and qualify names with "AMD" when
> noting the usage by the AMD docs.

Thanks, yes, because Linux is trying to map its view of the topology to
the vendor's and model all vendors properly, if possible.

> There's one DF/NB per package and it's a fixed value, i.e. it shouldn't
> change based on the NUMA configuration.

Aha, so the NB kinda serves the package and is part of it. That makes a
lot of sense and clears up some confusion.

> Here's an example of a 2-socket Naples system with 4 packages per socket,
> set up to have 1 NUMA node. The "node_id" value is the AMD NodeId from
> CPUID.
> 
> CPU=0 phys_proc_id=0 node_id=0 cpu_to_node()=0
> CPU=8 phys_proc_id=0 node_id=1 cpu_to_node()=0
> CPU=16 phys_proc_id=0 node_id=2 cpu_to_node()=0
> CPU=24 phys_proc_id=0 node_id=3 cpu_to_node()=0
> CPU=32 phys_proc_id=1 node_id=4 cpu_to_node()=0
> CPU=40 phys_proc_id=1 node_id=5 cpu_to_node()=0
> CPU=48 phys_proc_id=1 node_id=6 cpu_to_node()=0
> CPU=56 phys_proc_id=1 node_id=7 cpu_to_node()=0

Ok, node_id is the DF instance number in this case.
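
Just so we're looking at the same thing, here's a minimal sketch (not
from your series) of how such a dump could be produced with the generic
x86 topology accessors; amd_node_id() below is only a hypothetical
stand-in for whatever field or helper ends up carrying the AMD NodeId:

/*
 * Sketch only: dump per-CPU topology like the table above, using the
 * generic x86 topology accessors. amd_node_id() is a hypothetical
 * stand-in for the AMD NodeId accessor.
 */
#include <linux/cpu.h>
#include <linux/topology.h>
#include <linux/printk.h>

static void dump_cpu_topology(void)
{
	unsigned int cpu;

	for_each_online_cpu(cpu)
		pr_info("CPU=%u phys_proc_id=%d node_id=%d cpu_to_node()=%d\n",
			cpu,
			topology_physical_package_id(cpu),
			amd_node_id(cpu),	/* hypothetical accessor */
			cpu_to_node(cpu));
}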

> Yeah, I think example 4b works here. The mismatch though is with
> phys_proc_id and package on AMD systems. You can see above that
> phys_proc_id gives a socket number, and the AMD NodeId gives a package
> number.

Ok, now look here:

"  - cpuinfo_x86.logical_proc_id:

    The logical ID of the package. As we do not trust BIOSes to enumerate the
    packages in a consistent way, we introduced the concept of logical package
    ID so we can sanely calculate the number of maximum possible packages in
    the system and have the packages enumerated linearly."

Doesn't that sound like exactly what you need?

Because that DF ID *is* practically the package ID, as there's a 1:1
mapping between DF and package, as you say above.

Right?
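
IOW, assuming that 1:1 mapping holds, something like this sketch would
already give you a linear, firmware-independent ID, since
topology_logical_package_id() just wraps cpuinfo_x86.logical_proc_id:

/*
 * Sketch only: if the DF/NodeId really is 1:1 with a package, the
 * existing logical package ID already provides a linear enumeration
 * independent of how the BIOS numbers the packages.
 */
#include <linux/topology.h>

static int example_amd_node_id(unsigned int cpu)	/* hypothetical helper */
{
	return topology_logical_package_id(cpu);
}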

Now, it says

[    7.670791] smpboot: Max logical packages: 2

on my Rome box, but what you want sounds very much like the logical
package ID. If we define it on AMD to be exactly that and document it
this way, I guess that should work too, provided there are no caveats
like the scheduler using this info for proper task placement and so on.
That would need a code audit, of course...

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
