Message-ID: <SL2PR06MB30827A53DBA601F0FFFFA81BBD0F9@SL2PR06MB3082.apcprd06.prod.outlook.com>
Date:   Mon, 14 Mar 2022 02:13:06 +0000
From:   王擎 <wangqing@...o.com>
To:     Darren Hart <darren@...amperecomputing.com>
CC:     Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Sudeep Holla <sudeep.holla@....com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH V2] sched: topology: make cache topology separate from cpu
 topology


>> From: Wang Qing <wangqing@...o.com>
>> 
>> On some architectures (e.g. ARM64), caches are implemented as below:
>> SD(Level 1):          ************ DIE ************
>> SD(Level 0):          **** MC ****    **** MC *****
>> cluster:              **cluster 0**   **cluster 1**
>> cores:                0   1   2   3   4   5   6   7
>> cache(Level 1):       C   C   C   C   C   C   C   C
>> cache(Level 2):       **C**   **C**   **C**   **C**
>> cache(Level 3):       *******shared Level 3********
>> sd_llc_id(current):   0   0   0   0   4   4   4   4
>> sd_llc_id(should be): 0   0   2   2   4   4   6   6
>
>Should clusters 0 and 1 span the same CPU mask as the MCs? Based on how
>you describe the cache above, it seems like what you are looking for
>would be:
>
>(SD DIE level removed in favor of MC, which spans the same CPUs)
>SD(Level 1):          ************ MC  ************
>SD(Level 0):          *CLS0*  *CLS1*  *CLS2*  *CLS3* (CONFIG_SCHED_CLUSTER)
>cores:                0   1   2   3   4   5   6   7
>cache(Level 1):       C   C   C   C   C   C   C   C
>cache(Level 2):       **C**   **C**   **C**   **C**
>cache(Level 3):       *******shared Level 3********
>
>Provided cpu_coregroup_mask and cpu_clustergroup_mask return the
>corresponding cpumasks, this should work with the default sched domain
>topology.
>
>It looks to me like the lack of nested cluster support in
>parse_cluster() in drivers/base/arch_topology.c is what needs to be
>updated to accomplish the above. With cpu_topology[cpu].cluster_sibling and
>core_sibling updated to reflect the topology you describe, the rest of
>the sched domains construction would work with the default sched domain
>topology.

A complex (core[0-1]) looks like a nested cluster, but it is not exactly one:
the cores within a complex only share the L2 cache.
parse_cluster() only parses the CPU topology; it does not parse the cache
topology even if it is described in the DT.
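
Just to illustrate what I mean (a rough sketch only, not what the patch
does; cpu_l2_node() and cpu_share_l2() are made-up helper names), the L2
grouping could be derived by following the next-level-cache phandles of
each CPU node:

/*
 * Sketch: find the first cache node above a CPU (its L2) and treat two
 * CPUs as siblings at that level when they reference the same cache node.
 */
#include <linux/of.h>

static struct device_node *cpu_l2_node(int cpu)
{
	struct device_node *cpu_np, *cache_np;

	cpu_np = of_get_cpu_node(cpu, NULL);
	if (!cpu_np)
		return NULL;

	/* The first hop up the next-level-cache chain is the L2. */
	cache_np = of_find_next_cache_node(cpu_np);
	of_node_put(cpu_np);
	return cache_np;
}

static bool cpu_share_l2(int cpu0, int cpu1)
{
	struct device_node *a = cpu_l2_node(cpu0);
	struct device_node *b = cpu_l2_node(cpu1);
	bool shared = a && a == b;

	of_node_put(a);
	of_node_put(b);
	return shared;
}

With the DT below, cpu_share_l2(0, 1) is true (both CPUs reference L2_1)
while cpu_share_l2(1, 2) is false, which matches the 0 0 2 2 4 4 6 6
grouping above.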

>I'm not very familiar with DT, especially the cpu-map. Does your DT
>reflect the topology you want to build?

The DT looks like:
cpu-map {
	cluster0 {
		core0 {
			cpu = <&cpu0>;
		};
		core1 {
			cpu = <&cpu1>;
		};
		core2 {
			cpu = <&cpu2>;
		};
		core3 {
			cpu = <&cpu3>;
		};
		doe_dvfs_cl0: doe {
		};
	};

	cluster1 {
		core0 {
			cpu = <&cpu4>;
		};
		core1 {
			cpu = <&cpu5>;
		};
		core2 {
			cpu = <&cpu6>;
		};
		doe_dvfs_cl1: doe {
		};
	};
};

cpus {
	cpu0: cpu@100 {
		next-level-cache = <&L2_1>;
		L2_1: l2-cache {
			compatible = "cache";
			next-level-cache = <&L3_1>;
		};
		L3_1: l3-cache {
			compatible = "cache";
		};
	};

	cpu1: cpu@101 {
		next-level-cache = <&L2_1>;
	};

	cpu2: cpu@102 {
		next-level-cache = <&L2_2>;
		L2_2: l2-cache {
			compatible = "cache";
			next-level-cache = <&L3_1>;
		};
	};

	cpu3: cpu@103 {
		next-level-cache = <&L2_2>;
	};

	cpu4: cpu@100 {
		next-level-cache = <&L2_3>;
		L2_3: l2-cache {
			compatible = "cache";
			next-level-cache = <&L3_1>;
		};
	};

	cpu5: cpu@101 {
		next-level-cache = <&L2_3>;
	};

	cpu6: cpu@102 {
		next-level-cache = <&L2_4>;
		L2_4: l2-cache {
			compatible = "cache";
			next-level-cache = <&L3_1>;
		};
	};

	cpu7: cpu@200 {
		next-level-cache = <&L2_4>;
	};
};
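
So, restating just what the next-level-cache links above describe:

	L2_1: cpu0, cpu1
	L2_2: cpu2, cpu3
	L2_3: cpu4, cpu5
	L2_4: cpu6, cpu7
	L3_1: shared by all of the cores above

i.e. the L2 sharing is per pair of cores and does not follow the two
clusters in the cpu-map, which is why sd_llc_id should come out as
0 0 2 2 4 4 6 6.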

Thanks,
Wang

>
>
>-- 
>Darren Hart
>Ampere Computing / OS and Kernel
