Message-ID: <20210222110023.GB4499@arm.com>
Date:   Mon, 22 Feb 2021 11:00:23 +0000
From:   Ionela Voinescu <ionela.voinescu@....com>
To:     Viresh Kumar <viresh.kumar@...aro.org>
Cc:     Rafael Wysocki <rjw@...ysocki.net>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        linux-pm@...r.kernel.org, Sudeep Holla <sudeep.holla@....com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH V3 2/2] cpufreq: cppc: Add support for frequency
 invariance

Hey,

Some test results:

On Thursday 18 Feb 2021 at 16:35:38 (+0000), Ionela Voinescu wrote:
[..]
> > +static void __init cppc_freq_invariance_init(void)
> > +{
[..]
> > +
> > +		ret = cppc_get_perf_ctrs(i, &fb_ctrs);
> > +		if (!ret)
> > +			per_cpu(cppc_fi->prev_perf_fb_ctrs, i) = fb_ctrs;
> 

After fixing this one:
			cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
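
For reference, a minimal sketch of how I read the fixed init path (apart
from cppc_get_perf_ctrs() and the per-CPU accessors, the names here are
assumptions from the quoted context, so they may not match the patch
exactly):

	static void __init cppc_freq_invariance_init(void)
	{
		struct cppc_perf_fb_ctrs fb_ctrs = {0};
		struct cppc_freq_invariance *cppc_fi;
		int i, ret;

		for_each_possible_cpu(i) {
			/* cppc_fi already points at this CPU's instance */
			cppc_fi = &per_cpu(cppc_freq_inv, i);

			ret = cppc_get_perf_ctrs(i, &fb_ctrs);
			/* assign through cppc_fi; no per_cpu() on the member */
			if (!ret)
				cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
		}
	}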

I got the following:

Platform:

 - Juno R2 (CPUs [0-3] are littles, CPUs [4-5] are bigs)
    + PMU counters, used by CPPC through FFH
    + userspace/schedutil cpufreq governors


  - Verifying that with the userspace governor we see a correct change in
    scale factor (a quick sanity check of the values follows the logs below):

	root@...ldroot:~# dmesg | grep FIE
	[    6.436770] AMU: CPUs[0-3]: AMU counters WON'T be used for FIE.
	[    6.436962] AMU: CPUs[4-5]: AMU counters WON'T be used for FIE.
	[    6.451510] CPPC:CPUs[0-5]: CPPC counters will be used for FIE.

	root@...ldroot:~# echo 600000 > policy4/scaling_setspeed
	[  353.939495] CPU4: Invariance(cppc) scale: 512.
	[  353.939497] CPU5: Invariance(cppc) scale: 512.

	root@...ldroot:~# echo 1200000 > policy4/scaling_setspeed
	[  372.683511] CPU5: Invariance(cppc) scale: 1024.
	[  372.683518] CPU4: Invariance(cppc) scale: 1024.

	root@...ldroot:~# echo 450000 > policy0/scaling_setspeed
	[  641.495513] CPU2: Invariance(cppc) scale: 485.
	[  641.495514] CPU1: Invariance(cppc) scale: 485.
	[  641.495517] CPU0: Invariance(cppc) scale: 485.
	[  641.495542] CPU3: Invariance(cppc) scale: 485.

	root@...ldroot:~# echo 950000 > policy0/scaling_setspeed
	[  852.015514] CPU2: Invariance(cppc) scale: 1024.
	[  852.015514] CPU1: Invariance(cppc) scale: 1024.
	[  852.015517] CPU0: Invariance(cppc) scale: 1024.
	[  852.015541] CPU3: Invariance(cppc) scale: 1024.
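
    These values line up with the requested frequency scaled to
    SCHED_CAPACITY_SCALE (1024) against the policy maximum (1200000 for
    the bigs and 950000 for the littles, per the requests above that
    land on 1024):

	600000 / 1200000 * 1024  = 512
	450000 /  950000 * 1024 ~= 485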

 - I ran some benchmarks as well (perf, hackbench, dhrystone) on the same
   platform, using the userspace governor at fixed frequency, to evaluate
   the impact of the work we do or don't do on the tick.

   ./perf bench sched pipe
   (10 iterations, higher is better, ops/s, comparisons with
   cpufreq-based FIE)

   cpufreq-based FIE    AMU-based FIE    CPPC-based FIE
   ----------------------------------------------------
   39498.8		40984.7		 38893.4
   std: 3.766%		std: 4.461%	 std: 0.575%
   			diff: 3.625%	 diff: -1.556%

   ./hackbench -l 1000
   (10 iterations, lower is better, seconds, comparison with
   cpufreq-based FIE)

   cpufreq-based FIE    AMU-based FIE    CPPC-based FIE
   ----------------------------------------------------
   6.4207		6.3386		 6.7841
   std: 7.298%		std: 2.252%	 std: 2.460%
   			diff: -1.295%	 diff: 5.356%

   This shows a small regression for the CPPC-based FIE, but within the
   standard deviation.
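
   (For the notation in the tables: std is the standard deviation across
   the 10 iterations, as a percentage, and diff is the relative change
   versus the cpufreq-based FIE column.)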

   I also ran some dhrystone benchmarks (./dhrystone -t 2/34/5/6/ -l 5000)
   with the schedutil governor, to understand whether an increase in
   accuracy with the AMU/CPPC counters makes a difference. Given the
   characteristics of the platform it's no surprise that the results
   were very similar between the three cases, so I won't bore you with
   the numbers.

Hope it helps,
Ionela.
