Date:   Fri, 25 Mar 2022 13:06:28 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     王擎 <wangqing@...o.com>,
        Russell King <linux@...linux.org.uk>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Paul Walmsley <paul.walmsley@...ive.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Albert Ou <aou@...s.berkeley.edu>,
        Sudeep Holla <sudeep.holla@....com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>
Subject: Re: [PATCH] sched: dynamic config sd_flags if described in DT

On 23/03/2022 07:45, 王擎 wrote:

[...]

>> Now, if you want to move ShPR from MC to DIE then a custom topology
>> table should do it, i.e. you don't have to change any generic task
>> scheduler code.
>>
>> static inline int cpu_cpu_flags(void)
>> {
>>        return SD_SHARE_PKG_RESOURCES;
>> }
>>
>> static struct sched_domain_topology_level custom_topology[] = {
>> #ifdef CONFIG_SCHED_SMT
>>         { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
>> #endif
>>
>> #ifdef CONFIG_SCHED_CLUSTER
>>         { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
>> #endif
>>
>> #ifdef CONFIG_SCHED_MC
>>         { cpu_coregroup_mask, SD_INIT_NAME(MC) },
>>                             ^^^^
>> #endif
>>         { cpu_cpu_mask, cpu_cpu_flags, SD_INIT_NAME(DIE) },
>>                         ^^^^^^^^^^^^^
>>         { NULL, },
>> };
>>
>> set_sched_topology(custom_topology);
> 
> However, due to the limitation of GKI, we cannot change the sd topology
> by ourselves. But we can configure CPU and cache topology through DT.

IMHO, mainline can't do anything here. You should talk to your Android
platform provider in this case. Android concepts like Generic Kernel
Image (GKI) are normally not discussed here.

From a mainline perspective we're OK with scheduling such a system
flat, i.e. with only a single MC SD [CPU0..CPU7] for each CPU.
It could be that the Phantom SD is still needed for additional
proprietary or Android add-ons though?

If you removed `clusterX` from your DT cpu-map (the Phantom SD
information, i.e. the reason why you get e.g. for CPU0: `MC (ShPR)
[CPU0..CPU3]` and `DIE [CPU0..CPU7]`), you should see the natural
topology: only `MC (ShPR) [CPU0..CPU7]`.
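
Just as a sketch (not something I have tested, and labels like `&cpu0`
.. `&cpu7` are placeholders for whatever your cpu node phandles are
called), the flattened cpu-map would then look roughly like this:

    cpus {
            /* ... cpu@0 .. cpu@7 nodes unchanged ... */

            cpu-map {
                    /*
                     * One cluster spanning all 8 CPUs, so you should
                     * end up with just `MC (ShPR) [CPU0..CPU7]`, as
                     * described above.
                     */
                    cluster0 {
                            core0 { cpu = <&cpu0>; };
                            core1 { cpu = <&cpu1>; };
                            core2 { cpu = <&cpu2>; };
                            core3 { cpu = <&cpu3>; };
                            core4 { cpu = <&cpu4>; };
                            core5 { cpu = <&cpu5>; };
                            core6 { cpu = <&cpu6>; };
                            core7 { cpu = <&cpu7>; };
                    };
            };
    };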

> So why not get the ShPR from DT first? If not configured, use the default.

I'm not convinced that mainline will accept a change which is
necessary for an out-of-tree tweak (Phantom SD).

>>> *CLS[0-1][2-3](SD_SHARE_PKG_RESOURCES)
>>
>> But why do you want to have yet another SD underneath MC for CPU0-CPU3?
>> sd_llc is assigned to the highest ShPR SD, which would be DIE.
> 
> We want to do something with the shared L2 cache (for the complex, like
> walt); you can ignore it here and we can talk about it when we're done.

I assume you refer to the proprietary load-tracking mechanism `Window
Assisted Load Tracking` (WALT) here? It's also not in mainline.
