Message-ID: <001d01d6cd96$601c8a80$20559f80$@net>
Date: Tue, 8 Dec 2020 11:14:36 -0800
From: "Doug Smythies" <dsmythies@...us.net>
To: "'Rafael J. Wysocki'" <rafael@...nel.org>,
"'Giovanni Gherdovich'" <ggherdovich@...e.com>
Cc: "'Rafael J. Wysocki'" <rjw@...ysocki.net>,
"'Linux PM'" <linux-pm@...r.kernel.org>,
"'LKML'" <linux-kernel@...r.kernel.org>,
"'Viresh Kumar'" <viresh.kumar@...aro.org>,
"'Srinivas Pandruvada'" <srinivas.pandruvada@...ux.intel.com>,
"'Peter Zijlstra'" <peterz@...radead.org>
Subject: RE: [PATCH v1 0/4] cpufreq: Allow drivers to receive more information from the governor
On 2020.12.08 09:14 Rafael J. Wysocki wrote:
> On Tue, Dec 8, 2020 at 5:31 PM Giovanni Gherdovich <ggherdovich@...e.com> wrote:
>> On Mon, 2020-12-07 at 17:25 +0100, Rafael J. Wysocki wrote:
>>> Hi,
>>>
>>> This is based on the RFC posted a few days ago:
>>>
>>> https://lore.kernel.org/linux-pm/1817571.2o5Kk4Ohv2@kreacher/
>>>
>>> The majority of the original cover letter still applies, so let me quote it
>>> here:
>>> [...]
>>
>> Hello Rafael,
>>
>> I'd like to test this patch, as I have concerns on how it performs against the
>> current intel_pstate passive + HWP + schedutil (which leaves HWP.REQ.DESIRED
>> untouched).
>
> The performance will be worse in some cases, which is to be expected,
Agreed. More on that further below.
> but the point here is to counter the problem that in some cases the
> frequency is excessive now.
Agreed.
I retested with this new version using my load sweep test,
and my results were pretty similar to those previously reported [1].
If anything, there might be a perceptible difference between the RFC
version and this version as a function of work/sleep frequency:
73 Hertz: RFC was 0.8% less power
113 Hertz: RFC was 1.5% less power
211 Hertz: RFC was 0.8% less power
347 Hertz: RFC was 1.2% more power
401 Hertz: RFC was 1.6% more power
Of course, let us not lose sight of the original gains, which were in the 13 to 17% range.
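For anyone wanting to reproduce something similar: the load sweep numbers above come from a
periodic workload at the stated work/sleep frequency. A minimal sketch of that kind of
fixed-frequency work/sleep loop is below. It is an illustration only, not my actual load sweep
program, and the hertz and busy-fraction parameters are just placeholders.

/* Sketch of a fixed-frequency work/sleep load (illustration only, not the
 * actual load sweep program).  Usage: ./loadgen <hertz> <busy_fraction>
 * Assumes hertz > 1 so that the sleep portion fits in tv_nsec. */
#include <stdlib.h>
#include <time.h>

static double elapsed_ns(struct timespec *a, struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) * 1e9 + (b->tv_nsec - a->tv_nsec);
}

int main(int argc, char **argv)
{
	double hertz = (argc > 1) ? atof(argv[1]) : 73.0;
	double busy  = (argc > 2) ? atof(argv[2]) : 0.3;  /* fraction of each period spent working */
	double period_ns = 1e9 / hertz;
	struct timespec sleep_ts = { 0, (long)(period_ns * (1.0 - busy)) };

	for (;;) {
		struct timespec start, now;
		volatile double x = 0.0;

		clock_gettime(CLOCK_MONOTONIC, &start);
		do {			/* spin for the "work" part of the period */
			x += 1.0;
			clock_gettime(CLOCK_MONOTONIC, &now);
		} while (elapsed_ns(&start, &now) < period_ns * busy);

		nanosleep(&sleep_ts, NULL);	/* idle for the rest of the period */
	}
	return 0;
}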
Now, for the "worse in some cases, which is to be expected" part:
At least on my system, it is most evident for some of the pipe-type tests,
where the schedutil governor has never really known what to do. This patch
set seems to add enough of a downward bias that this version of the schedutil
governor now behaves much like the other versions.
Here is an example (no forced CPU affinity, 2 pipes):
Legend:
hwp: Kernel 5.10-rc6, HWP enabled; intel_cpufreq; schedutil (always)
rjw: Kernel 5.10-rc6 + this patch set, HWP enabled; intel_cpufreq; schedutil
no-hwp: Kernel 5.10-rc6, HWP disabled; intel_cpufreq; schedutil
acpi-cpufreq (or just acpi): Kernel 5.10-rc6, HWP disabled; acpi-cpufreq; schedutil
patch: Kernel 5.10-rc7 + the non-RFC, 4-patch version of this patch set, HWP enabled; intel_cpufreq; schedutil
hwp: 3.295 uSec/loop (steady); average power: 62.78 Watts (steady); freq: 4.60GHz (steady).
rjw: 3.6 uSec/loop (noisy) (9.3% worse); average power: 44.13 Watts (noisy); freq: ?.??GHz (noisy).
no-hwp: 3.35 uSec/loop (noisy); average power: 59.17 Watts (noisy); freq: ?.??GHz (much less noisy).
acpi-cpufreq: 3.4 uSec/loop (noisy); average power: 56.93 Watts (noisy); freq: ?.??GHz (less noisy).
patch: 3.6 uSec/loop (noisy) (9.3% worse); average power: 43.36 Watts (noisy); freq: ?.??GHz (noisy).
One could argue that this patch set might be driving the frequency down a little too hard in this case,
but this is one of those difficult cases where the work rotates through the CPUs anyhow.
As a side note, all other governors (well, not powersave) pin the CPU frequency at max (4.6 GHz).
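For reference, the 2-pipe test above is essentially a token bounce between two processes through a
pair of pipes, with the average round-trip time reported in uSec/loop. A rough sketch of that kind
of loop is below; it is an illustration only, not the exact test program used, and the loop count
is arbitrary.

/* Sketch of a 2-pipe token bounce (illustration only, not the exact pipe
 * test used above): parent and child wake each other through a pair of
 * pipes, and the average round-trip time is reported in uSec/loop. */
#include <stdio.h>
#include <unistd.h>
#include <time.h>

#define LOOPS 1000000L

int main(void)
{
	int p1[2], p2[2];
	char token = 't';
	struct timespec t0, t1;
	long i;

	if (pipe(p1) || pipe(p2))
		return 1;

	if (fork() == 0) {		/* child: echo the token back forever */
		close(p1[1]);
		close(p2[0]);
		while (read(p1[0], &token, 1) == 1)
			write(p2[1], &token, 1);
		_exit(0);
	}
	close(p1[0]);
	close(p2[1]);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < LOOPS; i++) {	/* parent: send token, wait for echo */
		write(p1[1], &token, 1);
		read(p2[0], &token, 1);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.3f uSec/loop\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e6 +
		(t1.tv_nsec - t0.tv_nsec) / 1e3) / LOOPS);
	return 0;
}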
For my version of the "Alex" pipe test (a token is passed around and around via 6 named pipes,
with a bit of work to do per token stop),
I get (more power = better performance):
hwp: average power: 17.16 Watts (very noisy)
rjw: average power: 3.18 Watts (noisy)
no-hwp: average power: 2.45 Watts (less noisy)
acpi-cpufreq: average power: 2.46 Watts (less noisy)
patch: average power: 2.47 Watts (less noisy)
The "hwp" case is misleading. It is really bouncing around at about 0.2 hertz
and can't seem to make up its mind. If nothing else, all the other versions
are a little more deterministic in their response.
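For reference, a rough sketch of the "Alex" style named-pipe token ring described above follows.
It is an illustration only, not my actual test; the fifo names and the amount of work per token
stop are made-up placeholders.

/* Sketch of the "Alex" style named-pipe token ring (illustration only, not
 * the actual test): a single token circulates through N fifos, with a bit
 * of (made-up) work done at each stop.  N = 6 matches the description above. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

#define N 6

static void token_stop_work(void)
{
	volatile double x = 0.0;
	int i;

	for (i = 0; i < 10000; i++)	/* arbitrary "bit of work" per token stop */
		x += i * 0.5;
}

int main(void)
{
	char name[N][32];
	int i, fd;

	for (i = 0; i < N; i++) {
		snprintf(name[i], sizeof(name[i]), "/tmp/alex_fifo_%d", i);
		mkfifo(name[i], 0600);	/* ignore "already exists" errors */
	}

	for (i = 0; i < N; i++) {
		if (fork() == 0) {	/* worker i: read token, work, pass it on */
			int in  = open(name[i], O_RDONLY);
			int out = open(name[(i + 1) % N], O_WRONLY);
			char token;

			while (read(in, &token, 1) == 1) {
				token_stop_work();
				write(out, &token, 1);
			}
			_exit(0);
		}
	}

	fd = open(name[0], O_WRONLY);	/* inject the token once; it then circulates */
	write(fd, "t", 1);
	pause();			/* run until interrupted */
	return 0;
}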
>
>> I'll get results within a week. Do you mind holding back the merge until then?
On my system, that git "make test" is broken, so I cannot do the high-PIDs-per-second test that way.
My own PIDs-per-second test also has issues on this computer.
I am not sure when I'll get these types of tests working, but will try for within a week.
... Doug
References:
[1] https://marc.info/?l=linux-kernel&m=160692480907696&w=2
My test results graphs (updated):
Note: I have to encode the web site address, or I get hammered by bots.
Note: it is .com only because it was less expensive than .org
73 Hertz (10 samples per second):
Double u double u double u dot smythies dot .com/~doug/linux/s18/hwp/k510-rc6/su73/
113 Hertz (10 samples per second):
Double u double u double u dot smythies dot .com/~doug/linux/s18/hwp/k510-rc6/su113/
211 Hertz (10 samples per second):
Double u double u double u dot smythies dot .com/~doug/linux/s18/hwp/k510-rc6/su211/
347 Hertz (10 samples per second):
Double u double u double u dot smythies dot .com/~doug/linux/s18/hwp/k510-rc6/su347/
401 Hertz (10 samples per second):
Double u double u double u dot smythies dot .com/~doug/linux/s18/hwp/k510-rc6/su401/
Note: The graphs below are mislabelled, because I made hack use of a tool dedicated
to graphing, and HTML wrapping, the load sweep test. The test conditions are steady-state
operation, with no variable changes.
pipe-test (5 seconds per sample):
Double u double u double u dot smythies dot .com/~doug/linux/s18/hwp/k510-rc6/pipe/
Alex test (5 seconds per sample):
Double u double u double u dot smythies dot .com/~doug/linux/s18/hwp/k510-rc6/alex/