Message-ID: <20200501055023.GA24574@in.ibm.com>
Date:   Fri, 1 May 2020 11:20:23 +0530
From:   Gautham R Shenoy <ego@...ux.vnet.ibm.com>
To:     Gautham R Shenoy <ego@...ux.vnet.ibm.com>
Cc:     Michael Ellerman <mpe@...erman.id.au>,
        Tyrel Datwyler <tyreld@...ux.ibm.com>,
        Nathan Lynch <nathanl@...ux.ibm.com>,
        Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
        Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
        "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
        linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 0/5] Track and expose idle PURR and SPURR ticks

On Thu, Apr 30, 2020 at 09:46:13AM +0530, Gautham R Shenoy wrote:
> Hello Michael,
> > >
> > > Michael, could you please consider this for 5.8 ?
> > 
> > Yes. Has it been tested on KVM at all?
> 
> No. I haven't tested this on KVM. Will do that today.


The results on Shared LPAR and KVM are as follows:
---------------------------------------------------

The lparstat results on a Shared LPAR are consistent with those
observed on a dedicated LPAR as long as at least one thread of the
core is active. When all the threads are idle, lparstat shows an
incorrect idle percentage. This is probably because the Hypervisor
puts a completely idle core into some power-saving state with the
runlatch turned off, so the PURR counts on the threads of the core do
not add up to the elapsed timebase ticks. The results are in
section A) below.

lparstat is not supported on KVM. However, I performed some basic
sanity checks on the purr, spurr, idle_purr, and idle_spurr sysfs
files that show up after this patch series. When CPUs are offlined,
the idle_purr and idle_spurr sysfs files disappear, just like the purr
and spurr files. The counter values increase monotonically, except
that the idle_purr and idle_spurr counts remain stagnant while the CPU
is busy, as expected.
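
(For reference, sampling these values is just a matter of reading the
per-CPU sysfs files once a second, which is how the dumps in section
B) below look. A minimal sketch of such a sampler is given here; the
path /sys/devices/system/cpu/cpuN/ and the hex output format are
assumptions based on the existing purr/spurr attributes, and CPU 0 is
only an example:

/*
 * Minimal sketch: sample purr/spurr/idle_purr/idle_spurr for one CPU
 * once a second.  The sysfs path and the hex format are assumptions
 * based on the existing purr/spurr sysfs files.
 */
#include <stdio.h>
#include <unistd.h>

static unsigned long long read_hex(const char *path)
{
	unsigned long long val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%llx", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	static const char *names[] = { "idle_purr", "idle_spurr", "purr", "spurr" };
	int cpu = 0;	/* CPU to sample; 0 is only an example */
	char path[128];

	for (int sample = 0; sample < 5; sample++) {
		for (int i = 0; i < 4; i++) {
			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/%s",
				 cpu, names[i]);
			printf("%s:%llx\n", names[i], read_hex(path));
		}
		printf("\n");
		sleep(1);
	}
	return 0;
}
)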

However, I don't think even the values of PURR or SPURR make much
sense on a KVM guest, since the Linux hypervisor doesn't set up
additional registers such as RWMR, except on POWER8, where KVM sets
RWMR according to the number of online threads in a vCORE before
dispatching the vcore. I haven't been able to test it on a POWER8
guest yet. The results on POWER9 are in section B) below.


A ) Shared LPAR
======================

1. When all the threads of the core are running a CPU-Hog

# ./lparstat -E 1 5
System Configuration
type=Shared mode=Capped smt=8 lcpu=6 mem=10362752 kB cpus=10 ent=6.00 
---Actual---                 -Normalized-
%busy  %idle   Frequency     %busy  %idle
------ ------  ------------- ------ ------
100.00   0.00  2.90GHz[126%] 126.00   0.00
100.00   0.00  2.90GHz[126%] 126.00   0.00
100.00   0.00  2.90GHz[126%] 126.00   0.00
100.00   0.00  2.90GHz[126%] 126.00   0.00
100.01   0.00  2.90GHz[126%] 126.01   0.00

2. When 4 threads of a core are running CPU Hogs, with the remaining 4
threads idle.

# ./lparstat -E 1 5
System Configuration
type=Shared mode=Capped smt=8 lcpu=6 mem=10362752 kB cpus=10 ent=6.00 
---Actual---                 -Normalized-
%busy  %idle   Frequency     %busy  %idle
------ ------  ------------- ------ ------
 81.06  18.94  2.97GHz[129%] 104.56  24.44
 81.05  18.95  2.97GHz[129%] 104.56  24.44
 81.06  18.95  2.97GHz[129%] 104.56  24.44
 81.06  18.95  2.97GHz[129%] 104.56  24.44
 81.05  18.95  2.97GHz[129%] 104.56  24.45

3. When 2 threads of a core are running CPU Hogs, with the other 6
threads idle.

# ./lparstat -E 1 5
System Configuration
type=Shared mode=Capped smt=8 lcpu=6 mem=10362752 kB cpus=10 ent=6.00 
---Actual---                 -Normalized-
%busy  %idle   Frequency     %busy  %idle
------ ------  ------------- ------ ------
 65.21  34.79  3.13GHz[136%]  88.69  47.31
 65.20  34.81  3.13GHz[136%]  88.67  47.33
 64.25  35.76  3.13GHz[136%]  87.38  48.63
 63.68  36.31  3.13GHz[136%]  86.60  49.39
 63.55  36.45  3.13GHz[136%]  86.42  49.58
 

4. When a single thread of the core is running a CPU-Hog, with the
remaining 7 threads idle.

# ./lparstat -E 1 5
System Configuration
type=Shared mode=Capped smt=8 lcpu=6 mem=10362752 kB cpus=10 ent=6.00 
---Actual---                 -Normalized-
%busy  %idle   Frequency     %busy  %idle
------ ------  ------------- ------ ------
 31.80  68.20  3.20GHz[139%]  44.20  94.80
 31.80  68.20  3.20GHz[139%]  44.20  94.81
 31.80  68.20  3.20GHz[139%]  44.20  94.80
 31.80  68.21  3.20GHz[139%]  44.20  94.81
 31.79  68.21  3.20GHz[139%]  44.19  94.81

5. When the LPAR is idle:

# ./lparstat -E 1 5
System Configuration
type=Shared mode=Capped smt=8 lcpu=6 mem=10362752 kB cpus=10 ent=6.00 
---Actual---                 -Normalized-
%busy  %idle   Frequency     %busy  %idle
------ ------  ------------- ------ ------
  0.04   0.14  2.41GHz[105%]   0.04   0.15
  0.04   0.15  2.36GHz[102%]   0.04   0.15
  0.03   0.13  2.35GHz[102%]   0.03   0.14
  0.03   0.13  2.31GHz[100%]   0.03   0.13
  0.03   0.13  2.32GHz[101%]   0.03   0.14

In this case, the sum of the PURR values does not add up to the
elapsed TB. This is probably due to the Hypervisor putting the core
into some power-saving state with the runlatch turned off.

# ./purr_tb -t 8
Got threads_per_core = 8
CORE 0: 
		CPU 0 : Delta PURR : 85744 
		CPU 1 : Delta PURR : 113632 
		CPU 2 : Delta PURR : 78224 
		CPU 3 : Delta PURR : 68856 
		CPU 4 : Delta PURR : 78064 
		CPU 5 : Delta PURR : 60488 
		CPU 6 : Delta PURR : 77776 
		CPU 7 : Delta PURR : 59464 
Total Delta PURR : 622248 (Expected ~513156096)
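
(purr_tb above sums the per-thread PURR deltas of a core over a fixed
interval and compares the sum against the elapsed timebase ticks; the
"Expected" value is just the timebase delta over the interval, roughly
512 million ticks per second given the 512 MHz timebase. A minimal
sketch of that kind of check is shown here; it is not the actual tool.
The sysfs path, the hex format, GCC's __builtin_ppc_get_timebase(),
and the assumption that CPUs 0..N-1 are the threads of core 0 are all
mine:

/*
 * Minimal sketch of the kind of check purr_tb performs (not the
 * actual tool): sum the PURR deltas of the threads of core 0 over a
 * one-second interval and compare against the elapsed timebase ticks.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static unsigned long long read_purr(int cpu)
{
	char path[64];
	unsigned long long val = 0;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/purr", cpu);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%llx", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(int argc, char **argv)
{
	int threads = (argc > 1) ? atoi(argv[1]) : 8;
	unsigned long long before[64];
	unsigned long long tb_start, tb_end, total = 0;

	if (threads < 1 || threads > 64)
		threads = 8;

	for (int i = 0; i < threads; i++)
		before[i] = read_purr(i);
	tb_start = __builtin_ppc_get_timebase();

	sleep(1);

	tb_end = __builtin_ppc_get_timebase();
	for (int i = 0; i < threads; i++) {
		unsigned long long delta = read_purr(i) - before[i];

		printf("\t\tCPU %d : Delta PURR : %llu\n", i, delta);
		total += delta;
	}
	printf("Total Delta PURR : %llu (Expected ~%llu)\n",
	       total, tb_end - tb_start);
	return 0;
}
)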


B) KVM guest
==============================


vCPU idle:
-------------
Sampled every second while the KVM guest (1 socket, 2 cores, 4 threads,
vCPUs pinned) was idle. The values monotonically increase over time,
as expected.


idle_purr:33128550
idle_spurr:3e3c775c
purr:d48181820
spurr:10295e8f28

idle_purr:331319f0
idle_spurr:3e3d56a4
purr:d481c4600
spurr:102964d3f0

idle_purr:331378c0
idle_spurr:3e3de58c
purr:d481faea0
spurr:102969f118

idle_purr:3313daa0
idle_spurr:3e3e77a4
purr:d4822c750
spurr:10296e9538

idle_purr:33143ab0
idle_spurr:3e3f093c
purr:d482608c0
spurr:1029737808

vCPU busy
---------------
Sampled every second on the same KVM guest, while the vCPU was running
a cpu-hog. The values of purr and spurr monotonically increase, while
idle_purr and idle_spurr remain stagnant, as expected.

idle_purr:3335fca0
idle_spurr:3e71a774
purr:d5dd6bca0
spurr:1049fca1f0

idle_purr:3335fca0
idle_spurr:3e71a774
purr:d7c6f1c50
spurr:1077e12d40

idle_purr:3335fca0
idle_spurr:3e71a774
purr:d9b078720
spurr:10a5c5cc08

idle_purr:3335fca0
idle_spurr:3e71a774
purr:db99ef1d0
spurr:10d3a8eac0

idle_purr:3335fca0
idle_spurr:3e71a774
purr:dd8365c20
spurr:11018c0908


As noted above, I don't think even the values of PURR or SPURR make
much sense on a KVM guest, since KVM doesn't program RWMR except on
POWER8. The POWER9 numbers below bear this out: with all four threads
busy, each thread appears to accumulate PURR at close to the full
timebase rate, so the per-core sum comes to roughly four times the
elapsed timebase ticks.

On a POWER9 KVM guest:

When it is idle:

# ./purr_tb -t 4
Got threads_per_core = 4
CORE 0: 
		CPU 0 : Delta PURR : 2371632 
		CPU 1 : Delta PURR : 5056 
		CPU 2 : Delta PURR : 8016 
		CPU 3 : Delta PURR : 12688 
Total Delta PURR : 2397392 (Expected ~514567680)


When all the threads are running CPU Hogs:
# ./purr_tb -t 4
Got threads_per_core = 4
CORE 0: 
		CPU 0 : Delta PURR : 510742304 
		CPU 1 : Delta PURR : 510747696 
		CPU 2 : Delta PURR : 510740208 
		CPU 3 : Delta PURR : 510735200 
Total Delta PURR : 2042965408 (Expected ~512289792)

> 
> 
> > 
> > cheers
> 
--
Thanks and Regards
gautham.
