Date:   Thu, 2 Sep 2021 19:30:24 +0200
From:   Borislav Petkov <bp@...en8.de>
To:     Yazen Ghannam <yazen.ghannam@....com>
Cc:     Naveen Krishna Chatradhi <nchatrad@....com>,
        linux-edac@...r.kernel.org, x86@...nel.org,
        linux-kernel@...r.kernel.org, mchehab@...nel.org,
        Muralidhara M K <muralimk@....com>
Subject: Re: [PATCH v3 1/3] x86/amd_nb: Add support for northbridges on
 Aldebaran

On Wed, Sep 01, 2021 at 06:17:21PM +0000, Yazen Ghannam wrote:
> These devices aren't officially GPUs, since they don't have graphics/video
> capabilities. Can we come up with a new term for this class of devices? Maybe
> accelerators or something?
> 
> In any case, GPU is still used throughout documentation and code, so it's fair
> to just stick with "gpu".

Hmm, yeah, everybody is talking about special-purpose processing units
now, i.e., accelerators or whatever they call them. I guess this is the
new best thing since sliced bread.

Well, what are those PCI IDs going to represent? Devices which have RAS
capabilities on them?

We have the "uncore" nomenclature in the perf subsystem for counters
which are not part of the CPU core. But we already use that term on
AMD, so reusing it here might cause confusion.

But I guess the type of those devices doesn't matter for amd_nb.c,
right?

All that code cares about is having an array of northbridges, each
with its respective PCI devices, and that's it. So for amd_nb.c I think
that differentiation doesn't matter... but keep reading...

> We use the Node ID to index into the amd_northbridge.nb array, e.g. in
> node_to_amd_nb().
> 
> We can get the Node ID of a GPU node when processing an MCA error as in Patch
> 2 of this set. The hardware is going to give us a value of 8 or more.
> 
> So, for example, if we set up the "nb" array like this for 1 CPU and 2 GPUs:
> [ID:Type] : [0: CPU], [8: GPU], [9: GPU]
>  
> Then I think we'll need some more processing at runtime to map, for example,
> an error from GPU Node 9 to NB array Index 2, etc.
> 
> Or we can manage this at init time like this:
> [0: CPU], [1: NULL], [2: NULL], [3: NULL], [4: NULL], [5: NULL], [6: NULL],
> [7: NULL], [8: GPU], [9: GPU]
> 
> And at runtime, the code which does Node ID to NB entry just works. This
> applies to node_to_amd_nb(), places where we loop over amd_nb_num(), etc.
> 
> What do you think?
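
To make that second option concrete: with the gap left in place at init
time, the existing ID-indexed lookup keeps working unchanged. A minimal
stand-alone sketch, where the structure and the node counts are
simplified assumptions rather than the kernel's actual amd_northbridge
definitions:

	#include <stddef.h>

	/* Simplified stand-in for struct amd_northbridge; the real one
	   holds PCI device pointers rather than a presence flag. */
	struct nb_entry {
		int present;			/* 0 for the padding slots 1..7 */
	};

	#define NB_ARRAY_SIZE 10		/* assumed: CPU node 0, GPU nodes 8 and 9 */

	static struct nb_entry nb[NB_ARRAY_SIZE];

	/* With the gap kept, a node_to_amd_nb()-style lookup can use the
	   hardware Node ID as the array index directly. */
	static struct nb_entry *node_to_nb(int node_id)
	{
		if (node_id < 0 || node_id >= NB_ARRAY_SIZE || !nb[node_id].present)
			return NULL;
		return &nb[node_id];
	}

The cost is the dead entries (eight of them in the example above); the
benefit is that node_to_amd_nb() and the loops over amd_nb_num() need
no special handling.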

Ok, looking at patch 2, it does:

	node_id = ((m->ipid >> 44) & 0xF);

So how ugly would it become if you do this here:

	node_id = ((m->ipid >> 44) & 0xF);
	node_id -= accel_id_offset;

where that accel_id_offset is the thing you've read out from one of the
Data Fabric registers before?
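
One way that could look, as a sketch rather than the actual patch: the
subtraction is generalized so that CPU node IDs pass through unchanged,
and the offset variables, the mapping helper, and the stand-in struct
mce are illustrative assumptions:

	#include <stdint.h>

	struct mce { uint64_t ipid; };	/* stand-in for the kernel's struct mce */

	static int accel_id_offset;	/* first accelerator Node ID, read from a
					   Data Fabric register at init (e.g. 8) */
	static int num_cpu_nodes;	/* number of CPU nodes (e.g. 1) */

	/* Map a hardware Node ID from the MCA IPID to a compact nb[] index. */
	static int mca_node_to_index(struct mce *m)
	{
		int node_id = (m->ipid >> 44) & 0xF;

		/* Accelerator IDs start at accel_id_offset; fold them back
		   so they follow the CPU entries with no gap in between. */
		if (node_id >= accel_id_offset)
			node_id = num_cpu_nodes + (node_id - accel_id_offset);

		return node_id;
	}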

This way, the gap between CPU IDs and accel IDs disappears, and the
software view never sees it.

Or are we reading other hardware registers which are aware of that gap,
so that we would have to reapply it to get the proper index? If so,
and if that gets really ugly, maybe we will have to bite the bullet and
keep the gap in the array, but that would be yucky...

Hmmm.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
