Date:   Fri, 20 May 2022 16:33:09 +0100
From:   Sudeep Holla <sudeep.holla@....com>
To:     Ionela Voinescu <ionela.voinescu@....com>
Cc:     Atish Patra <atishp@...shpatra.org>, linux-kernel@...r.kernel.org,
        Atish Patra <atishp@...osinc.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Sudeep Holla <sudeep.holla@....com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Qing Wang <wangqing@...o.com>,
        linux-arm-kernel@...ts.infradead.org,
        linux-riscv@...ts.infradead.org, Rob Herring <robh+dt@...nel.org>
Subject: Re: [PATCH v2 0/8] arch_topology: Updates to add socket support and
 fix cluster ids

On Thu, May 19, 2022 at 05:32:49PM +0100, Ionela Voinescu wrote:
> Hi Sudeep,
> 
> On Wednesday 18 May 2022 at 10:33:17 (+0100), Sudeep Holla wrote:
> > Hi All,
> > 
> > This series intends to fix some discrepancies we have in the CPU topology
> > parsing from the device tree /cpu-map node. The current behaviour also
> > diverges from that on an ACPI enabled platform. The expectation is that both
> > DT and ACPI enabled systems must present a consistent view of the CPU topology.
> > 
> > Currently we assign the generated cluster count as the physical package
> > identifier for each CPU, which is wrong. The device tree bindings for CPU
> > topology support socket nodes to infer the socket or physical package
> > identifier for a given CPU. Also, we don't check whether all the cores/threads
> > belong to the same cluster before updating their sibling masks, which is fine
> > as we don't set the cluster id yet.
> > 
> > These changes also assign the cluster identifier as parsed from the device
> > tree cluster nodes within /cpu-map, without support for nesting of clusters.
> > Finally, they add support for socket nodes in /cpu-map. With this, the exact
> > same information is parsed from both ACPI PPTT and the /cpu-map DT node, so
> > the two align well.
> > 
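> > As a rough illustration (the CPU phandles below are only placeholders), a
> > two-socket /cpu-map of the kind this series can now parse would look
> > something like:
> >
> > 	cpu-map {
> > 		socket0 {
> > 			cluster0 {
> > 				core0 {
> > 					cpu = <&CPU0>;
> > 				};
> > 				core1 {
> > 					cpu = <&CPU1>;
> > 				};
> > 			};
> > 		};
> > 		socket1 {
> > 			cluster0 {
> > 				core0 {
> > 					cpu = <&CPU2>;
> > 				};
> > 				core1 {
> > 					cpu = <&CPU3>;
> > 				};
> > 			};
> > 		};
> > 	};
> >
> > With the series applied, the socket node a CPU sits under is expected to
> > determine its package id and the cluster node underneath it its cluster id.
> >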
> > The only exception is that the last level cache id information can be
> > inferred from the same ACPI PPTT, whereas on DT we need to parse the CPU
> > cache nodes in the device tree separately.
> > 
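> > For example, on DT the last level cache siblings would be derived from the
> > usual cache hierarchy description; the labels and compatible string below
> > are only illustrative:
> >
> > 	CPU0: cpu@0 {
> > 		device_type = "cpu";
> > 		compatible = "arm,cortex-a53";
> > 		reg = <0x0>;
> > 		/* CPUs whose cache hierarchy ends in the same node
> > 		   become llc siblings */
> > 		next-level-cache = <&L2_0>;
> > 	};
> >
> > 	L2_0: l2-cache0 {
> > 		compatible = "cache";
> > 		cache-level = <2>;
> > 		cache-unified;
> > 	};
> >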
> > P.S.: I have not cc-ed Greg and Rafael, so that all the users of arch_topology
> > can agree on the changes first before we include them.
> > 
> > v1[1]->v2:
> > 	- Updated the ID validity check to include all non-negative values
> > 	- Added support to get the device node for the CPU's last level cache
> > 	- Added support to build llc_sibling on DT platforms
> > 
> > [1] https://lore.kernel.org/lkml/20220513095559.1034633-1-sudeep.holla@arm.com
> > 
> > Sudeep Holla (8):
> >   arch_topology: Don't set cluster identifier as physical package identifier
> >   arch_topology: Set thread sibling cpumask only within the cluster
> >   arch_topology: Set cluster identifier in each core/thread from /cpu-map
> >   arch_topology: Add support for parsing sockets in /cpu-map
> >   arch_topology: Check for non-negative value rather than -1 for IDs validity
> >   arch_topology: Avoid parsing through all the CPUs once a outlier CPU is found
> >   of: base: add support to get the device node for the CPU's last level cache
> >   arch_topology: Add support to build llc_sibling on DT platforms
> > 
> 
> Just a recommendation for patch-set structure: it would be best to have
> the following sequence to maintain the same scheduler topology and
> behaviour when partially applying the set (I'm currently testing this on Juno,
> but it should be the case for other platforms as well):
> 
> 2/8 arch_topology: Set thread sibling cpumask only within the cluster
> 5/8 arch_topology: Check for non-negative value rather than -1 for IDs validity
> 6/8 arch_topology: Avoid parsing through all the CPUs once a outlier CPU is found
> 
> --> these are only preparation/cleanup patches and don't affect current
> functionality
>

Agreed. Reordering was on my TODO list, but I wanted to get this v2 posted
first. The patches are currently ordered more by my train of thought on what
all needs to be done and the order in which I added them, and I knew that
would cause problems with kernel bisection.


> 7/8 of: base: add support to get the device node for the CPU's last level cache
> 8/8 arch_topology: Add support to build llc_sibling on DT platforms
>
> --> these will populate llc siblings but this list will be equal to
> core siblings (based on package_id) so nothing changes in the scheduler
> topology. Even if CONFIG_SCHED_CLUSTER=y, we still have cluster_id=-1 so
> nothing will change in that case either, for the patches so far.
>

Correct, I had already worked out this much of the detail.

> 1/8 arch_topology: Don't set cluster identifier as physical package identifier
> 
> --> 1/8 is the troublemaker if it's the first patch, as it will result
> in all CPUs ending up in core_siblings, so the topology will be flattened to
> just an MC level for a typical b.L system like Juno. But if you add it after
> all of the above patches, the llc_siblings will contribute to creating the
> same MC and DIE levels we expect.
> 
> 3/8 arch_topology: Set cluster identifier in each core/thread from /cpu-map
> 4/8 arch_topology: Add support for parsing sockets in /cpu-map
> 
> --> Here 3/8 will start creating complications when we have clusters in
> DT and CONFIG_SCHED_CLUSTER=y. But I'll detail this in a reply
> to that patch. For CONFIG_SCHED_CLUSTER=n the topology and scheduler
> behaviour should be the same as before this set.
>

Thanks for working out the ordering of the last 3. I had wanted to work that
out myself, but you have clearly done more work on it than I have, so thanks
for saving me the time. Much appreciated.

> Hope it helps,

Yes definitely, thanks again.

-- 
Regards,
Sudeep
