Message-Id: <20201007183800.27415-1-srikar@linux.vnet.ibm.com>
Date:   Thu,  8 Oct 2020 00:07:49 +0530
From:   Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To:     Michael Ellerman <mpe@...erman.id.au>
Cc:     linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>,
        Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Nicholas Piggin <npiggin@...il.com>,
        Anton Blanchard <anton@...abs.org>,
        "Oliver O'Halloran" <oohall@...il.com>,
        Nathan Lynch <nathanl@...ux.ibm.com>,
        Michael Neuling <mikey@...ling.org>,
        Gautham R Shenoy <ego@...ux.vnet.ibm.com>,
        Satheesh Rajendran <sathnaga@...ux.vnet.ibm.com>,
        Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Valentin Schneider <valentin.schneider@....com>,
        Qian Cai <cai@...hat.com>
Subject: [PATCH v3 00/11]  Optimization to improve CPU online/offline on Powerpc

Changelog v2->v3:
v1 link: https://lore.kernel.org/linuxppc-dev/20200921095653.9701-1-srikar@linux.vnet.ibm.com/t/#u
	Use GFP_ATOMIC instead of GFP_KERNEL since allocations need to be
	atomic at the time of CPU hotplug (as sketched below).
	Reported by Qian Cai <cai@...hat.com>
	Only changes in Patch 09 and Patch 11.
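
	For illustration only, the kind of change involved looks roughly
	like the sketch below (not the exact hunk; the cpumask helper and
	error handling are placeholders):

		cpumask_var_t mask;

		/*
		 * The CPU hotplug path here cannot sleep, so the
		 * allocation must not use GFP_KERNEL; GFP_ATOMIC avoids
		 * blocking.
		 */
		if (!zalloc_cpumask_var_node(&mask, GFP_ATOMIC,
					     cpu_to_node(cpu)))
			return -ENOMEM;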

Changelog v1->v2:
v1 link: https://lore.kernel.org/linuxppc-dev/20200727075532.30058-1-srikar@linux.vnet.ibm.com/t/#u
	Added five more patches on top of the original seven.
	Rebased to 19th Sept 2020 powerpc/next (based on v5.9-rc2)

Here are some optimizations and fixes to make CPU online/offline
faster and hence speed up boot.

This series is based on top of my v5 coregroup support patchset:
https://lore.kernel.org/linuxppc-dev/20200810071834.92514-1-srikar@linux.vnet.ibm.com/t/#u

Anton reported that his 4096-CPU system (1024 cores in a socket) was
taking too long to boot. He also found that most of the time was being
spent updating cpu_core_mask.
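
To see why, the pre-patchset mask update is roughly of the shape
sketched below (a simplified illustration, not the exact code): every
CPU coming online walks all CPUs already online and compares chip IDs,
so a full bringup costs O(N^2) lookups and mask updates.

	/* Simplified sketch of the old behaviour for one onlining CPU. */
	int chip_id = cpu_to_chip_id(cpu);
	int i;

	for_each_online_cpu(i) {
		if (cpu_to_chip_id(i) != chip_id)
			continue;
		/* Mark the two CPUs as belonging to the same package. */
		cpumask_set_cpu(cpu, cpu_core_mask(i));
		cpumask_set_cpu(i, cpu_core_mask(cpu));
	}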

The first two patches should solve Anton's immediate problem.
With the unofficial patches, Anton reported that boot time dropped from
30 minutes to 6 seconds (basically a high core count in a single-socket
configuration). Satheesh also reported similar numbers.

The rest are cleanups/optimizations.

Since cpu_core_mask has been an exported symbol for a long time, let's
retain it as a snapshot of cpumask_of_node.
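
In other words, something along the lines of the hedged sketch below
(the idea, not the exact hunk):

	/*
	 * Instead of rebuilding cpu_core_mask CPU by CPU, seed it once
	 * from the NUMA node mask, which is already maintained.
	 */
	cpumask_copy(cpu_core_mask(cpu),
		     cpumask_of_node(cpu_to_node(cpu)));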

$ lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              1024
On-line CPU(s) list: 0-1023
Thread(s) per core:  8
Core(s) per socket:  8
Socket(s):           16
NUMA node(s):        16
Model:               2.0 (pvr 004d 0200)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   pHyp
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
L2 cache:            512K
L3 cache:            8192K
NUMA node0 CPU(s):   0-63
NUMA node1 CPU(s):   64-127
NUMA node2 CPU(s):   128-191
NUMA node3 CPU(s):   192-255
NUMA node4 CPU(s):   256-319
NUMA node5 CPU(s):   320-383
NUMA node6 CPU(s):   384-447
NUMA node7 CPU(s):   448-511
NUMA node8 CPU(s):   512-575
NUMA node9 CPU(s):   576-639
NUMA node10 CPU(s):  640-703
NUMA node11 CPU(s):  704-767
NUMA node12 CPU(s):  768-831
NUMA node13 CPU(s):  832-895
NUMA node14 CPU(s):  896-959
NUMA node15 CPU(s):  960-1023

$ dmesg -k | grep -i -e Bringing -e Brought -e sysrq -e bug
With powerpc/next
[    0.000000] printk: debug: ignoring loglevel setting.
[    0.354971] smp: Bringing up secondary CPUs ...
[  233.354676] smp: Brought up 16 nodes, 1024 CPUs
[  330.023073] sysrq: Changing Loglevel
[  330.023101] sysrq: Loglevel set to 9

With +patchset
[    0.000000] printk: debug: ignoring loglevel setting.
[    0.351703] smp: Bringing up secondary CPUs ...
[    4.059859] smp: Brought up 16 nodes, 1024 CPUs
[   98.309015] sysrq: Changing Loglevel
[   98.309044] sysrq: Loglevel set to 9

Observations:
CPU bringup time dropped from 233 seconds to 4 seconds on this 1024-CPU
system, which reduced system boot time from 330 seconds to 98 seconds.
The actual improvement will depend on your system topology.

Topology verification post-patchset on a 2-node Power9 PowerVM LPAR

powerpc/next                                                        +patchset
------------                                                        ---------
$ lscpu
Architecture:        ppc64le                                        Architecture:        ppc64le
Byte Order:          Little Endian                                  Byte Order:          Little Endian
CPU(s):              128                                            CPU(s):              128
On-line CPU(s) list: 0-127                                          On-line CPU(s) list: 0-127
Thread(s) per core:  8                                              Thread(s) per core:  8
Core(s) per socket:  8                                              Core(s) per socket:  8
Socket(s):           2                                              Socket(s):           2
NUMA node(s):        2                                              NUMA node(s):        2
Model:               2.2 (pvr 004e 0202)                            Model:               2.2 (pvr 004e 0202)
Model name:          POWER9 (architected), altivec supported        Model name:          POWER9 (architected), altivec supported
Hypervisor vendor:   pHyp                                           Hypervisor vendor:   pHyp
Virtualization type: para                                           Virtualization type: para
L1d cache:           32K                                            L1d cache:           32K
L1i cache:           32K                                            L1i cache:           32K
L2 cache:            512K                                           L2 cache:            512K
L3 cache:            10240K                                         L3 cache:            10240K
NUMA node0 CPU(s):   0-63                                           NUMA node0 CPU(s):   0-63
NUMA node1 CPU(s):   64-127                                         NUMA node1 CPU(s):   64-127

$ tail -f /proc/cpuinfo
processor	: 127                                               processor	: 127
cpu		: POWER9 (architected), altivec supported           cpu		: POWER9 (architected), altivec supported
clock		: 3000.000000MHz                                    clock		: 3000.000000MHz
revision	: 2.2 (pvr 004e 0202)                               revision	: 2.2 (pvr 004e 0202)

timebase	: 512000000                                         timebase	: 512000000
platform	: pSeries                                           platform	: pSeries
model		: IBM,9008-22L                                      model		: IBM,9008-22L
machine		: CHRP IBM,9008-22L                                 machine		: CHRP IBM,9008-22L
MMU		: Radix                                             MMU		: Radix

$ grep . /proc/sys/kernel/sched_domain/cpu0/domain*/name
--------------------------------------------------------
/proc/sys/kernel/sched_domain/cpu0/domain0/name:SMT                 /proc/sys/kernel/sched_domain/cpu0/domain0/name:SMT
/proc/sys/kernel/sched_domain/cpu0/domain1/name:CACHE               /proc/sys/kernel/sched_domain/cpu0/domain1/name:CACHE
/proc/sys/kernel/sched_domain/cpu0/domain2/name:DIE                 /proc/sys/kernel/sched_domain/cpu0/domain2/name:DIE
/proc/sys/kernel/sched_domain/cpu0/domain3/name:NUMA                /proc/sys/kernel/sched_domain/cpu0/domain3/name:NUMA

$ grep . /proc/sys/kernel/sched_domain/cpu0/domain*/flags
---------------------------------------------------------
/proc/sys/kernel/sched_domain/cpu0/domain0/flags:2391               /proc/sys/kernel/sched_domain/cpu0/domain0/flags:2391
/proc/sys/kernel/sched_domain/cpu0/domain1/flags:2327               /proc/sys/kernel/sched_domain/cpu0/domain1/flags:2327
/proc/sys/kernel/sched_domain/cpu0/domain2/flags:2071               /proc/sys/kernel/sched_domain/cpu0/domain2/flags:2071
/proc/sys/kernel/sched_domain/cpu0/domain3/flags:12801              /proc/sys/kernel/sched_domain/cpu0/domain3/flags:12801

Post ppc64_cpu --smt=1
$ tail -f /proc/cpuinfo
processor	: 120                                               processor	: 120
cpu		: POWER9 (architected), altivec supported           cpu		: POWER9 (architected), altivec supported
clock		: 3000.000000MHz                                    clock		: 3000.000000MHz
revision	: 2.2 (pvr 004e 0202)                               revision	: 2.2 (pvr 004e 0202)

timebase	: 512000000                                         timebase	: 512000000
platform	: pSeries                                           platform	: pSeries
model		: IBM,9008-22L                                      model	: IBM,9008-22L
machine		: CHRP IBM,9008-22L                                 machine	: CHRP IBM,9008-22L
MMU		: Radix                                             MMU		: Radix

$ grep . /proc/sys/kernel/sched_domain/cpu0/domain*/name
--------------------------------------------------------
/proc/sys/kernel/sched_domain/cpu0/domain0/name:DIE                 /proc/sys/kernel/sched_domain/cpu0/domain0/name:DIE
/proc/sys/kernel/sched_domain/cpu0/domain1/name:NUMA                /proc/sys/kernel/sched_domain/cpu0/domain1/name:NUMA

$ grep . /proc/sys/kernel/sched_domain/cpu0/domain*/flags
---------------------------------------------------------
/proc/sys/kernel/sched_domain/cpu0/domain0/flags:2071               /proc/sys/kernel/sched_domain/cpu0/domain0/flags:2071
/proc/sys/kernel/sched_domain/cpu0/domain1/flags:12801              /proc/sys/kernel/sched_domain/cpu0/domain1/flags:12801

Performance impact post +patchset
---------------------------------
100 iterations of ebizzy
Units: Records/second : higher is better
-----------------------------------------
kernel        N    Min     Max     Median  Avg        Stddev
powerpc/next  100  753917  870520  819054  817636.56  22649.7
+patchset     100  746258  874984  816681  813876.74  26424.351


100 iterations of perf bench sched pipe -l 10000000 (aka Hackbench)
units: usec/ops: lesser is better
--------------------------------
kernel        N    Min        Max        Median     Avg        Stddev
powerpc/next  100  13.845834  14.569539  14.06263   14.086167  0.17512607
+patchset     100  13.637611  18.097744  13.862656  13.9257    0.43872453


schbench Latency percentiles (usec)
units: usec : lesser is better
-----------------------------------
powerpc/next      	+patchset
50.0000th: 48     	50.0000th: 49
75.0000th: 65     	75.0000th: 66
90.0000th: 77     	90.0000th: 79
95.0000th: 84     	95.0000th: 85
*99.0000th: 101   	*99.0000th: 99
99.5000th: 113    	99.5000th: 104
99.9000th: 159    	99.9000th: 129
min=0, max=15221  	min=0, max=7666

100 iterations of ppc64_cpu --smt=1 / ppc64_cpu --smt=8
Units: seconds : lesser is better
---------------------------------
ppc64_cpu --smt=1
kernel        N    Min    Max    Median  Avg      Stddev
powerpc/next  100  13.39  17.55  14.71   14.7658  0.69184745
+patchset     100  13.3   16.27  14.33   14.4179  0.5427433

ppc64_cpu --smt=8
kernel        N    Min    Max    Median  Avg      Stddev
powerpc/next  100  21.65  26.17  23.71   23.7111  0.8589786
+patchset     100  21.88  25.79  23.16   23.2945  0.86394839


Observations:
Performance of ebizzy / perf bench sched pipe / schbench remains the
same with and without the patchset.

Cc: linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>
Cc: LKML <linux-kernel@...r.kernel.org>
Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: Nicholas Piggin <npiggin@...il.com>
Cc: Anton Blanchard <anton@...abs.org>
Cc: Oliver O'Halloran <oohall@...il.com>
Cc: Nathan Lynch <nathanl@...ux.ibm.com>
Cc: Michael Neuling <mikey@...ling.org>
Cc: Gautham R Shenoy <ego@...ux.vnet.ibm.com>
Cc: Satheesh Rajendran <sathnaga@...ux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Valentin Schneider <valentin.schneider@....com>
Cc: Qian Cai <cai@...hat.com>

Srikar Dronamraju (11):
  powerpc/topology: Update topology_core_cpumask
  powerpc/smp: Stop updating cpu_core_mask
  powerpc/smp: Remove get_physical_package_id
  powerpc/smp: Optimize remove_cpu_from_masks
  powerpc/smp: Limit CPUs traversed to within a node.
  powerpc/smp: Stop passing mask to update_mask_by_l2
  powerpc/smp: Depend on cpu_l1_cache_map when adding CPUs
  powerpc/smp: Check for duplicate topologies and consolidate
  powerpc/smp: Optimize update_mask_by_l2
  powerpc/smp: Move coregroup mask updation to a new function
  powerpc/smp: Optimize update_coregroup_mask

 arch/powerpc/include/asm/smp.h      |   5 -
 arch/powerpc/include/asm/topology.h |   7 +-
 arch/powerpc/kernel/smp.c           | 188 +++++++++++++++++++++++-------------
 3 files changed, 122 insertions(+), 78 deletions(-)

-- 
2.17.1
