Message-Id: <1607596739-32439-1-git-send-email-ego@linux.vnet.ibm.com>
Date:   Thu, 10 Dec 2020 16:08:54 +0530
From:   "Gautham R. Shenoy" <ego@...ux.vnet.ibm.com>
To:     Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        Anton Blanchard <anton@...abs.org>,
        Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Michael Neuling <mikey@...ling.org>,
        Nicholas Piggin <npiggin@...il.com>,
        Nathan Lynch <nathanl@...ux.ibm.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Valentin Schneider <valentin.schneider@....com>
Cc:     linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
        "Gautham R. Shenoy" <ego@...ux.vnet.ibm.com>
Subject: [PATCH v3 0/5] Extend Parsing "ibm,thread-groups" for Shared-L2 information

From: "Gautham R. Shenoy" <ego@...ux.vnet.ibm.com>

Hi,

This is v3 of the patchset to extend the parsing of the "ibm,thread-groups"
property to discover shared-L2 cache information.

The previous versions can be found here :

v2 : https://lore.kernel.org/linuxppc-dev/1607533700-5546-1-git-send-email-ego@linux.vnet.ibm.com/T/#m043ea15d3832658527fca94765202b9cbefd330d

v1 : https://lore.kernel.org/linuxppc-dev/1607057327-29822-1-git-send-email-ego@linux.vnet.ibm.com/T/#m0fabffa1ea1a2807b362f25c849bb19415216520


Changes from v2-->v3:
 * Fixed the build errors reported by the Kernel Test Robot for Patches 4 and 5.

Changes from v1-->v2:
Incorporated the review comments from Srikar and
fixed a build error on !PPC64 configs reported by the kernel build bot.

 * Split Patch 1 into three patches:
   * The first patch makes parse_thread_groups() generic so that it
     can support more than one property.
   * The second patch renames cpu_l1_cache_map as
     thread_group_l1_cache_map for consistency. No functional impact.
   * The third patch makes init_thread_group_l1_cache_map()
     generic. No functional impact.

* Patch 2 (Now patch 4): Incorporates the review comments from Srikar,
   simplifying the changes to update_mask_by_l2().

* Patch 3 (Now patch 5): Fixes build errors for 32-bit configs
   reported by the kernel build bot.

Description of the Patchset
===========================
The "ibm,thread-groups" device-tree property is an array that is used
to indicate if groups of threads within a core share certain
properties. It provides details of which property is being shared by
which groups of threads. This array can encode information about
multiple properties being shared by different thread-groups within the
core.

Example: Suppose,
"ibm,thread-groups" = [1,2,4,8,10,12,14,9,11,13,15,2,2,4,8,10,12,14,9,11,13,15]

This can be decomposed into two consecutive arrays:

a) [1,2,4,8,10,12,14,9,11,13,15]
b) [2,2,4,8,10,12,14,9,11,13,15]

wherein,

a) provides information of Property "1" being shared by "2" groups,
   each with "4" threads. The "ibm,ppc-interrupt-server#s" of the
   first group is {8,10,12,14} and the "ibm,ppc-interrupt-server#s" of
   the second group is {9,11,13,15}. Property "1" indicates that the
   threads in the group share the L1 cache, the translation cache and
   the Instruction Data flow.

b) provides information of Property "2" being shared by "2" groups,
   each group with "4" threads. The "ibm,ppc-interrupt-server#s" of
   the first group is {8,10,12,14} and the
   "ibm,ppc-interrupt-server#s" of the second group is
   {9,11,13,15}. Property "2" indicates that the threads in each group
   share the L2-cache.
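
To make the layout concrete, here is a minimal user-space sketch (in
C) that decomposes such an array into per-property thread groups. This
is purely illustrative and not the kernel implementation; the struct
and function names below are made up for this example.

#include <stdio.h>

/*
 * A decoded view of one property block within "ibm,thread-groups":
 * [property, nr_groups, threads_per_group, <thread list ...>].
 * The names here are illustrative, not the kernel's.
 */
struct thread_group_block {
	unsigned int property;          /* e.g. 1 = shared L1, 2 = shared L2 */
	unsigned int nr_groups;         /* number of thread groups           */
	unsigned int threads_per_group; /* threads in each group             */
	const unsigned int *threads;    /* nr_groups * threads_per_group ids */
};

/* Walk the flat array and print each property block and its groups. */
static void decode_thread_groups(const unsigned int *arr, unsigned int len)
{
	unsigned int i = 0;

	while (i + 3 <= len) {
		struct thread_group_block b = {
			.property          = arr[i],
			.nr_groups         = arr[i + 1],
			.threads_per_group = arr[i + 2],
			.threads           = &arr[i + 3],
		};
		unsigned int total = b.nr_groups * b.threads_per_group;

		if (i + 3 + total > len)
			break;	/* truncated/malformed property block */

		printf("property %u: %u groups of %u threads\n",
		       b.property, b.nr_groups, b.threads_per_group);
		for (unsigned int g = 0; g < b.nr_groups; g++) {
			printf("  group %u:", g);
			for (unsigned int t = 0; t < b.threads_per_group; t++)
				printf(" %u",
				       b.threads[g * b.threads_per_group + t]);
			printf("\n");
		}
		i += 3 + total;
	}
}

int main(void)
{
	/* The example array from above. */
	const unsigned int tg[] = { 1, 2, 4, 8, 10, 12, 14, 9, 11, 13, 15,
				    2, 2, 4, 8, 10, 12, 14, 9, 11, 13, 15 };

	decode_thread_groups(tg, sizeof(tg) / sizeof(tg[0]));
	return 0;
}

Running this on the example array prints the two property blocks, each
with the groups {8,10,12,14} and {9,11,13,15}.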
   
The existing code assumes that "ibm,thread-groups" encodes
information about only one property. Hence, even on platforms which
encode information about multiple properties being shared by the
corresponding groups of threads, the current code will only pick the
first one. (In the above example, it will only consider
[1,2,4,8,10,12,14,9,11,13,15] but not [2,2,4,8,10,12,14,9,11,13,15]).

Furthermore, currently on platforms where groups of threads share L2
cache, we incorrectly create an extra CACHE level sched-domain that
maps to all the threads of the core.

For example, if "ibm,thread-groups" is 
		 00000001 00000002 00000004 00000000
		 00000002 00000004 00000006 00000001
		 00000003 00000005 00000007 00000002
		 00000002 00000004 00000000 00000002
		 00000004 00000006 00000001 00000003
		 00000005 00000007

then, the sub-array
[00000002 00000002 00000004
 00000000 00000002 00000004 00000006
 00000001 00000003 00000005 00000007]
indicates that L2 (Property "2") is shared only between the threads of a
single group. There are "2" groups of threads, where each group contains
"4" threads. The groups are {0,2,4,6} and {1,3,5,7}.

However, the sched-domain hierarchy for CPUs 0,1 is
	CPU0 attaching sched-domain(s):
	domain-0: span=0,2,4,6 level=SMT
	domain-1: span=0-7 level=CACHE
	domain-2: span=0-15,24-39,48-55 level=MC
	domain-3: span=0-55 level=DIE

	CPU1 attaching sched-domain(s):
	domain-0: span=1,3,5,7 level=SMT
	domain-1: span=0-7 level=CACHE
	domain-2: span=0-15,24-39,48-55 level=MC
	domain-3: span=0-55 level=DIE

where the CACHE domain reports that L2 is shared across the entire
core, which is incorrect on such platforms.

This patchset remedies these issues by extending the parsing support
for "ibm,thread-groups" to discover information about multiple
properties being shared by the corresponding groups of threads. In
particular, we can now detect if the groups of threads within a core
share the L2-cache. On such platforms, we populate the
cpu_l2_cache_mask of every CPU with the core-siblings which share L2
with that CPU, as specified by the "ibm,thread-groups" property
array.
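
Conceptually, once the L2 thread-group of a CPU is known, populating
its L2 sibling mask reduces to setting the bits of every CPU in the
same group. The following free-standing sketch shows the idea with
plain bitmaps; the kernel itself works with cpumasks and
cpu_l2_cache_mask, so treat this only as an illustration.

#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8	/* 8 CPUs assumed, matching the example above */

/* Illustrative per-CPU L2 sibling bitmaps (bit N == CPU N). */
static uint64_t l2_sibling_mask[NR_CPUS];

/*
 * Given one L2 thread-group (a list of CPU ids that share an L2),
 * mark every member as an L2 sibling of every other member.
 */
static void set_l2_siblings(const unsigned int *group, unsigned int nr)
{
	uint64_t mask = 0;

	for (unsigned int i = 0; i < nr; i++)
		mask |= 1ULL << group[i];

	for (unsigned int i = 0; i < nr; i++)
		l2_sibling_mask[group[i]] = mask;
}

int main(void)
{
	/* The two L2 groups from the example: {0,2,4,6} and {1,3,5,7}. */
	const unsigned int g0[] = { 0, 2, 4, 6 };
	const unsigned int g1[] = { 1, 3, 5, 7 };

	set_l2_siblings(g0, 4);
	set_l2_siblings(g1, 4);

	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%u L2 siblings mask: 0x%02llx\n",
		       cpu, (unsigned long long)l2_sibling_mask[cpu]);
	return 0;
}

Note how the resulting masks (0x55 for CPUs 0,2,4,6 and 0xaa for CPUs
1,3,5,7) correspond to the shared_cpu_map values reported in sysfs
further below.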

With the patchset, the sched-domain hierarchy is correctly
reported. For example, for CPUs 0 and 1, with the patchset applied:

	CPU0 attaching sched-domain(s):
	domain-0: span=0,2,4,6 level=SMT
	domain-1: span=0-15,24-39,48-55 level=MC
	domain-2: span=0-55 level=DIE

	CPU1 attaching sched-domain(s):
	domain-0: span=1,3,5,7 level=SMT
	domain-1: span=0-15,24-39,48-55 level=MC
	domain-2: span=0-55 level=DIE

The CACHE domain with span=0,2,4,6 for CPU 0 (resp. span=1,3,5,7 for
CPU 1) is degenerated into the SMT domain. Furthermore, the
last-level-cache domain is now correctly set to the SMT sched-domain.

Testing
=======

We use the producer-consumer testcase
(https://github.com/gautshen/misc/tree/master/producer_consumer)
in which the producer thread performs writes to 4096 random locations
and the consumer thread subsequently reads from those 4096 random
locations. We measure the time taken by the consumer to finish the
4096 reads (called an iteration of the consumer). Thus, the lower the
value, the better the result.
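
For reference, the core of such a measurement can be sketched as
below: a simplified, single-iteration producer-consumer pair in C with
pthreads (built with -pthread). The buffer size and timing details
here are assumptions for illustration; the actual testcase is the one
at the URL above, and it presumably also pins the two threads (e.g.
via sched_setaffinity()) to realise the affinity combinations shown in
the tables that follow.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_SIZE  (1 << 20)	/* assumed buffer size */
#define NR_ACCESS 4096		/* locations written and then read back */

static int buf[BUF_SIZE];
static unsigned int idx[NR_ACCESS];	/* the random locations touched */
static pthread_barrier_t barrier;

static void *producer(void *arg)
{
	(void)arg;
	for (int i = 0; i < NR_ACCESS; i++) {
		idx[i] = rand() % BUF_SIZE;
		buf[idx[i]] = i;		/* write a random location */
	}
	pthread_barrier_wait(&barrier);		/* hand over to the consumer */
	return NULL;
}

static void *consumer(void *arg)
{
	struct timespec t0, t1;
	volatile int sink = 0;

	(void)arg;
	pthread_barrier_wait(&barrier);		/* wait for the producer */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < NR_ACCESS; i++)
		sink += buf[idx[i]];		/* read the same locations */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("consumer iteration: %ld us (sink=%d)\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000L +
	       (t1.tv_nsec - t0.tv_nsec) / 1000L, sink);
	return NULL;
}

int main(void)
{
	pthread_t p, c;

	pthread_barrier_init(&barrier, NULL, 2);
	pthread_create(&p, NULL, producer, NULL);
	pthread_create(&c, NULL, consumer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	return 0;
}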

The best case occurs when the producer and consumer are affined to
the same L2 cache domain (e.g. CPU0 and CPU2). On the platform with
thread-groups sharing L2:
|-----------------------------------------------|
| Without Patch                                 |
|-----------|-----------|-----------------------|
| Producer  | Consumer  | Avg time per Consumer |
| Affinity  | Affinity  | Iteration             |
|-----------|-----------|-----------------------|
|  CPU0     |  CPU2     |   235us               |
|-----------|-----------|-----------------------|
|Not affined|Not affined|   347us               |
|-----------------------------------------------|

We see that out of the box, the average time per consumer iteration
is higher, since the tasks can be placed anywhere within the core
without necessarily being in the same L2 domain.

|-----------------------------------------------|
| With Patch                                    |
|-----------|-----------|-----------------------|
| Producer  | Consumer  | Avg time per Consumer |
| Affinity  | Affinity  | Iteration             |
|-----------|-----------|-----------------------|
|  CPU0     |  CPU2     |   235us               |
|-----------|-----------|-----------------------|
|Not affined|Not affined|   236us               |
|-----------------------------------------------|

With the patch, since the L2 domain is correctly identified, the
scheduler does the right thing by co-locating the producer and
consumer on the same L2 domain, thereby yielding out-of-the-box
performance that matches the best case.

Finally, this patchset reports the correct shared_cpu_map/list in
sysfs for the L2 cache on such platforms. With the patchset, for CPUs
0 and 1 we see the correct L2 shared_cpu_map/list:

/sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_list:0,2,4,6
/sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_map:000000,00000055

/sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_list:1,3,5,7
/sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_map:000000,000000aa
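
The same information can be cross-checked from user space with a small
sketch like the one below, which simply dumps the L2 shared_cpu_list
for the first few CPUs (the loop bound of 8 CPUs is an assumption for
this example):

#include <stdio.h>

int main(void)
{
	char path[128], line[256];

	for (int cpu = 0; cpu < 8; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cache/index2/shared_cpu_list",
			 cpu);
		FILE *f = fopen(path, "r");

		if (!f)
			continue;	/* CPU or cache index not present */
		if (fgets(line, sizeof(line), f))
			printf("cpu%d L2 siblings: %s", cpu, line);
		fclose(f);
	}
	return 0;
}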

The patchset has been tested on older platforms which encode only the
L1 sharing information via "ibm,thread-groups", and no regressions
were found.

Gautham R. Shenoy (5):
  powerpc/smp: Parse ibm,thread-groups with multiple properties
  powerpc/smp: Rename cpu_l1_cache_map as thread_group_l1_cache_map
  powerpc/smp: Rename init_thread_group_l1_cache_map() to make it
    generic
  powerpc/smp: Add support detecting thread-groups sharing L2 cache
  powerpc/cacheinfo: Print correct cache-sibling map/list for L2 cache

 arch/powerpc/include/asm/smp.h  |   6 +
 arch/powerpc/kernel/cacheinfo.c |  30 +++--
 arch/powerpc/kernel/smp.c       | 241 ++++++++++++++++++++++++++++------------
 3 files changed, 198 insertions(+), 79 deletions(-)

-- 
1.9.4
