Date:   Mon, 19 Oct 2020 16:42:24 +0200
From:   Brice Goglin <Brice.Goglin@...ia.fr>
To:     Morten Rasmussen <morten.rasmussen@....com>,
        Jonathan Cameron <Jonathan.Cameron@...wei.com>
Cc:     Len Brown <len.brown@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        x86@...nel.org, guohanjun@...wei.com, linux-kernel@...r.kernel.org,
        linuxarm@...wei.com, linux-acpi@...r.kernel.org,
        Jerome Glisse <jglisse@...hat.com>,
        Sudeep Holla <sudeep.holla@....com>,
        Will Deacon <will@...nel.org>, valentin.schneider@....com,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [RFC PATCH] topology: Represent clusters of CPUs within a die.


On 19/10/2020 at 16:16, Morten Rasmussen wrote:
>
>>> If there is a provable benefit of having interconnect grouping
>>> information, I think it would be better represented by a distance matrix
>>> like we have for NUMA.
>> There have been some discussions in various forums about how to
>> describe the complexity of interconnects well enough to actually be
>> useful.  Those have mostly floundered on the immense complexity of
>> designing such a description in a fashion any normal software would actually
>> use.  +cc Jerome who raised some of this in the kernel a while back.
> I agree that representing interconnect details is hard. I had hoped that
> a distance matrix would be better than nothing and more generic than
> inserting extra group masks.
>
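
For concreteness, the NUMA distance matrix referred to above is already
exported row by row through /sys/devices/system/node/nodeN/distance. A
minimal sketch that dumps it, purely for illustration (it assumes
contiguous node numbering, which is a simplification):

/* Illustrative only: print each NUMA node's row of the kernel's
 * distance matrix as exposed in sysfs. Real node IDs may be sparse;
 * stopping at the first missing node keeps the example short. */
#include <stdio.h>

int main(void)
{
	char path[64], row[256];

	for (int node = 0; ; node++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/distance", node);
		FILE *f = fopen(path, "r");
		if (!f)
			break;
		if (fgets(row, sizeof(row), f))
			printf("node%d: %s", node, row);
		fclose(f);
	}
	return 0;
}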

The distance matrix is indeed more precise, but would it scale to
tens/hundreds of cores? When ACPI HMAT latency/bandwidth was added,
there were concerns that exposing the full matrix would be an issue for
the kernel (that's why only local latency/bandwidth is exposed in
sysfs). And that was only for NUMA nodes/targets/initiators; here you
would have significantly more cores than that.
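
To put rough numbers on the scaling concern, a back-of-the-envelope
sketch (the endpoint counts below are arbitrary assumptions, not real
topologies): exposing only local values, as the HMAT sysfs interface
does for NUMA today, grows linearly with the number of endpoints, while
a full matrix grows quadratically:

/* Back-of-the-envelope: number of values needed to describe the
 * interconnect for N endpoints. "Local only" is linear in N; a full
 * distance/latency matrix is N*N. Endpoint counts are made up. */
#include <stdio.h>

int main(void)
{
	const long endpoints[] = { 4, 64, 256, 1024 };

	printf("%10s %12s %14s\n", "endpoints", "local-only", "full matrix");
	for (unsigned i = 0; i < sizeof(endpoints) / sizeof(endpoints[0]); i++) {
		long n = endpoints[i];
		printf("%10ld %12ld %14ld\n", n, n, n * n);
	}
	return 0;
}

At around a thousand cores the full matrix is already on the order of a
million entries, which is the kind of blow-up that worried people for
HMAT even at NUMA-node scale.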

Brice

