Message-ID: <CAKfTPtDuUL_n-fCEmC1=Pn9ARGyEQttZ+F0=6L+L17+Z9fVZWQ@mail.gmail.com>
Date:	Fri, 3 Jul 2015 11:57:47 +0200
From:	Vincent Guittot <vincent.guittot@...aro.org>
To:	Michael Turquette <mturquette@...libre.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Morten Rasmussen <Morten.Rasmussen@....com>,
	Rik van Riel <riel@...hat.com>,
	Mike Galbraith <efault@....de>,
	Nicolas Pitre <nicolas.pitre@...aro.org>,
	Daniel Lezcano <daniel.lezcano@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Amit Kucheria <amit.kucheria@...aro.org>,
	Juri Lelli <juri.lelli@....com>,
	"rjw@...ysocki.net" <rjw@...ysocki.net>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	ashwin.chaugule@...aro.org, Alex Shi <alex.shi@...aro.org>,
	"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
	abelvesa@...il.com, pebolle@...cali.nl,
	Michael Turquette <mturquette@...aro.org>
Subject: Re: [PATCH v3 0/4] scheduler-driven cpu frequency selection

Hi Mike,

I have tried to evaluate the performance and the power consumption of
the various policies that we have in the kernel to manage the DVFS of
cpus, including your sched-dvfs proposal. For this purpose, I have used
rt-app and a script that are available here:
https://git.linaro.org/power/rt-app.git/shortlog/refs/heads/master.
The script is in the directory:
doc/examples/cpufreq_governor_efficiency/

For evaluating the performance of a DVFS policy, I use rt-app to run a
simple sequence that alternates a run phase and a sleep phase. The run
phase is done by looping a defined number of times on a function that
burns cycles (the number of loops is calibrated to be equivalent to a
specified duration at the max frequency of the cpu). At the end of the
test, we know how much time each policy needs to run the same pattern,
and we can compare policies. I have also done some power consumption
measurements so we can compare both the performance/responsiveness and
the power consumption (the power figures only reflect the power
consumption of the A53 cluster).
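
To make the pattern concrete, here is a minimal stand-in for the
workload in plain C (a sketch only; the real test uses the rt-app JSON
configuration from the script above, and the loop count below is a
placeholder rather than an actual calibration):

#include <unistd.h>

static volatile unsigned long sink;

/* Burn cycles for a given number of loop iterations. */
static void burn(unsigned long loops)
{
        unsigned long i;

        for (i = 0; i < loops; i++)
                sink += i;
}

int main(void)
{
        /* Placeholder: would be calibrated so that one run phase takes
         * e.g. 50ms at the cpu's max frequency. */
        unsigned long loops_per_run = 10000000UL;
        int i;

        for (i = 0; i < 20; i++) {
                burn(loops_per_run);    /* run phase */
                usleep(1000 * 1000);    /* 1000ms sleep phase */
        }
        return 0;
}

A governor that ramps the frequency up quickly finishes each run phase
in close to the calibrated duration; a slower one stretches it out, and
the total wall-clock time is what the perf figures below are based on.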

The test has been done on a Cortex-A53 platform, so I have ported the
frequency invariance patch [1/4] to arm64 (attached to this email for
reference).

The performance figures have been normalized: 100% means as fast as
the performance governor, which always uses the max frequency, and 0%
means as slow as the powersave governor, which always uses the min
frequency. The power consumption figures have been normalized against
the performance governor's power consumption. For all governors I have
used the default parameters, which means a sampling rate of 10ms and a
sampling_down_factor of 1 for ondemand, and a sampling rate of 200ms
for the conservative governor.
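
Roughly speaking, the scores can be thought of as follows (a sketch
only: the linear interpolation between the two reference governors is
an assumption based on the description above, and the helper names are
illustrative):

/* 100% == as fast as performance, 0% == as slow as powersave */
static double perf_score(double t_policy, double t_performance,
                         double t_powersave)
{
        return 100.0 * (t_powersave - t_policy) /
                       (t_powersave - t_performance);
}

/* energy normalized against the performance governor's consumption */
static double energy_score(double e_policy, double e_performance)
{
        return 100.0 * e_policy / e_performance;
}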

              performance   powersave     ondemand      sched         conservative
run/sleep     perf  energy  perf  energy  perf  energy  perf  energy  perf  energy
50ms/1000ms   100%  100%      0%   34%     95%   62%     49%   35%     72%   33%
100ms/1000ms  100%  100%      0%   41%     98%   78%     69%   41%     92%   45%
200ms/1000ms  100%  100%      0%   51%     98%   78%     86%   62%     93%   62%
400ms/1000ms  100%  100%      0%   55%     99%   83%     93%   63%     99%   77%
1000ms/100ms  100%  100%      0%   74%     99%   100%    96%   97%     99%   99%

We can see that the responsiveness of the sched governor is not that
good for short run durations, but this can probably be explained by
the responsiveness of the load tracking, which needs around 75ms to
reach 80% of max usage. There are several proposals to add the blocked
tasks into the usage of a cpu; this may improve the responsiveness for
some patterns.
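
As a rough sanity check of that 75ms figure (a sketch, assuming the
~32ms decay half-life of the per-entity load tracking; not taken from
the actual kernel code):

#include <math.h>
#include <stdio.h>

int main(void)
{
        /* A task running continuously from zero usage ramps up roughly
         * as 1 - 0.5^(t/32) with a 32ms half-life. */
        double t = 75.0;        /* ms of continuous running */

        printf("tracked usage after %.0fms: %.0f%%\n",
               t, 100.0 * (1.0 - pow(0.5, t / 32.0)));
        /* prints about 80%, consistent with the observation above */
        return 0;
}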

Regards,
Vincent

On 27 June 2015 at 01:53, Michael Turquette <mturquette@...libre.com> wrote:
> This series addresses the comments from v2 and rearranges the code to
> separate the bits that are safe to merge from the bits that are not. In
> particular the simplified governor no longer implements any policy of
> its own. That code is migrated to fair.c and marked as RFC. The
> capacity-selection code in fair.c (patch #4) is a placeholder and can be
> replaced by something more sophisticated, and it illustrates the use of
> the new governor APIs for anyone who wants to play around with
> crafting a new policy.
>
> Patch #1 is a re-post from Morten, as it is the only dependency these
> patches have on his EAS series. Please consider merging patches #2 and
> #3 if they do not appear too controversial. Without enabling the feature
> in Kconfig there will be no impact on existing code.
>
> Michael Turquette (3):
>   cpufreq: introduce cpufreq_driver_might_sleep
>   sched: scheduler-driven cpu frequency selection
>   [RFC] sched: cfs: cpu frequency scaling policy
>
> Morten Rasmussen (1):
>   arm: Frequency invariant scheduler load-tracking support
>
>  arch/arm/include/asm/topology.h |   7 +
>  arch/arm/kernel/smp.c           |  53 ++++++-
>  arch/arm/kernel/topology.c      |  17 +++
>  drivers/cpufreq/Kconfig         |  24 ++++
>  drivers/cpufreq/cpufreq.c       |   6 +
>  include/linux/cpufreq.h         |  12 ++
>  kernel/sched/Makefile           |   1 +
>  kernel/sched/cpufreq_sched.c    | 308 ++++++++++++++++++++++++++++++++++++++++
>  kernel/sched/fair.c             |  41 ++++++
>  kernel/sched/sched.h            |   8 ++
>  10 files changed, 475 insertions(+), 2 deletions(-)
>  create mode 100644 kernel/sched/cpufreq_sched.c
>
> --
> 1.9.1
>

View attachment "0001-arm64-Frequency-invariant-scheduler-load-tracking-su.patch" of type "text/x-patch" (4623 bytes)
