Message-ID: <CAJZ5v0jH6pXs07r9xvUwUDWz7A8DbDO-2UfMUTmMygO1PJ76=Q@mail.gmail.com>
Date: Mon, 5 Jul 2021 15:24:43 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Doug Smythies <dsmythies@...us.net>
Cc: "Rafael J. Wysocki" <rjw@...ysocki.net>,
LKML <linux-kernel@...r.kernel.org>,
Linux PM <linux-pm@...r.kernel.org>
Subject: Re: [PATCH v1 0/5] cpuidle: teo: Rework the idle state selection logic
On Sun, Jul 4, 2021 at 11:01 PM Doug Smythies <dsmythies@...us.net> wrote:
>
> Hi Rafael,
>
> On 2021.06.02 11:14 Rafael J. Wysocki wrote:
>
> > Hi All,
> >
> > This series of patches addresses some theoretical shortcomings in the
> > TEO (Timer Events Oriented) cpuidle governor by reworking its idle
> > state selection logic to some extent.
> >
> > Patches [1-2/5] are introductory cleanups and the substantial changes are
> > made in patches [3-4/5] (please refer to the changelogs of these two
> > patches for details). The last patch only deals with documentation.
> >
> > Even though this work is mostly based on theoretical considerations, it
> > shows a measurable reduction of the number of cases in which the
> > shallowest idle state is selected while it would be more beneficial to
> > select a deeper one, or the deepest idle state is selected while it
> > would be more beneficial to select a shallower one, which should be a
> > noticeable improvement.
>
> Do you have any test results to share? Or test methods that I can try?
> I have done a few tests, and generally don't notice much difference.
> Perhaps an increase in the idle state 2 "below" (i.e. was too shallow) numbers.
> I am searching for some results that would offset the following:
>
> The difficulty I am having with this patch set is the additional overhead
> which becomes significant at the extremes, where idle state 0 is dominant.
> Throughout the history of teo, I have used multiple one-core pipe-tests
> for this particular test. Some results:
>
> CPU: Intel(R) Core(TM) i5-10600K CPU @ 4.10GHz
> HWP: disabled
> CPU frequency scaling driver: intel_pstate, active, powersave
> Pipe-tests are run forever, printing the average loop time for the
> last 2.5 million loops. 1021 of those averages are averaged again.
> Total = 2.5525e9 loops
> The power and idle data is sampled for 100 minutes.
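>
> For reference, a minimal sketch of this kind of pipe-test
> (illustrative only, not the exact harness used; pin both tasks
> to one CPU with taskset(1) for the one-core variant):
>
> /* Parent and child bounce one byte through two pipes; the average
>  * round-trip time is printed for every 2.5 million loops.
>  */
> #include <stdio.h>
> #include <stdlib.h>
> #include <time.h>
> #include <unistd.h>
>
> #define LOOPS 2500000UL
>
> int main(void)
> {
> 	int ab[2], ba[2];	/* parent->child, child->parent */
> 	char token = 0;
>
> 	if (pipe(ab) || pipe(ba))
> 		exit(1);
>
> 	if (fork() == 0) {	/* child: echo the token back */
> 		for (;;) {
> 			if (read(ab[0], &token, 1) != 1)
> 				exit(0);
> 			write(ba[1], &token, 1);
> 		}
> 	}
>
> 	for (;;) {		/* parent: time LOOPS round trips */
> 		struct timespec t1, t2;
> 		unsigned long i;
> 		double ns;
>
> 		clock_gettime(CLOCK_MONOTONIC, &t1);
> 		for (i = 0; i < LOOPS; i++) {
> 			write(ab[1], &token, 1);
> 			read(ba[0], &token, 1);
> 		}
> 		clock_gettime(CLOCK_MONOTONIC, &t2);
> 		ns = (t2.tv_sec - t1.tv_sec) * 1e9 +
> 		     (t2.tv_nsec - t1.tv_nsec);
> 		printf("%.4f uSec/loop\n", ns / 1000.0 / LOOPS);
> 	}
> }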
>
> Note 1: other tests were also done, including with the passive,
> schedutil configuration, but that isn't relevant for this test
> because the CPU frequency stays pinned at maximum.
>
> Note 2: I use TCC offset for thermal throttling, but I disabled it
> for these tests, because the temperature needed to go higher
> than my normal throttling point.
>
> Idle configuration 1: As a COMETLAKE processor, with 4 idle states.
> Kernel 5.13-rc4.
>
> Before patch set average:
> 2.8014 uSec/loop
> 113.9 watts
> Idle state 0 residency: 9.450%
> Idle state 0 entries per minute: 256,812,896.6
>
> After patch set average:
> 2.8264 uSec/loop, 0.89% slower
> 114.0 watts
> Idle state 0 residency: 8.677%
> Idle state 0 entries per minute: 254,560,049.9
>
> Menu governor:
> 2.8051 uSec/loop, 0.13% slower
> 113.9 watts
> Idle state 0 residency: 8.437%
> Idle state 0 entries per minute: 256,436,417.2
>
> O.K., perhaps not so bad, but also not many idle states.
>
> Idle configuration 2: As a SKYLAKE processor, with 9 idle states.
> i.e., in drivers/idle/intel_idle.c:
>
> static const struct x86_cpu_id intel_idle_ids[] __initconst = {
> ...
> 	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X,	&idle_cpu_skx),
> +	X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE,	&idle_cpu_skl),
>
> Purpose: To demonstrate increasing overhead as a function of number
> of idle states.
> Kernel 5.13.
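>
> A quick way to confirm which idle configuration is active is to
> count the state directories in sysfs (a sketch, assuming the
> standard cpuidle sysfs layout; expect 4 states vs 9 states here):
>
> /* Count /sys/devices/system/cpu/cpu0/cpuidle/state* entries. */
> #include <dirent.h>
> #include <stdio.h>
> #include <string.h>
>
> int main(void)
> {
> 	DIR *d = opendir("/sys/devices/system/cpu/cpu0/cpuidle");
> 	struct dirent *de;
> 	int n = 0;
>
> 	if (!d)
> 		return 1;
> 	while ((de = readdir(d)))
> 		if (!strncmp(de->d_name, "state", 5))
> 			n++;
> 	closedir(d);
> 	printf("%d idle states\n", n);
> 	return 0;
> }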
>
> Before patch set average:
> 2.8394 uSec/loop
> 114.2 watts
> Idle state 0 residency: 7.212%
> Idle state 0 entries per minute: 253,391,954.3
>
> After patch set average:
> 2.9103 uSec/loop, 2.5% slower
> 114.4 watts, 0.18% more
> Idle state 0 residency: 6.152%, 14.7% less.
> Idle state 0 entries per minute: 244,024,752.1
>
> Menu governor:
> 2.8141 uSec/loop, 0.89% faster
> 113.9 watts, 0.26% less
> Idle state 0 residency: 7.167%, 0.6% less
> Idle state 0 entries per minute: 255,650,610.7
>
> Another potentially interesting test was the ebizzy test:
> Records per second, averaged over many tests, varying
> threads and intervals:
>
> passive, schedutil:
> Before: 6771.977
> After: 5502.643, -18.7%
> Menu: 10728.89, +58.4%
>
> Active, powersave:
> Before: 8361.82
> After: 8463.31, +1.2%
> Menu: 8225.58, -1.6%
>
> I think this has more to do with the CPU frequency scaling
> governor than with this patch set, so:
>
> performance:
> Before: 12137.33
> After: 12083.26, -0.4%
> Menu: 11983.73, -1.3%
>
> These and other test results are available here:
> (encoded to prevent a barrage of bots)
>
> double u double u double u dot smythies dot com
> /~doug/linux/idle/teo-2021-06/
>
> ... a day later ...
>
> I might have an answer to my own question.
> By switching to cross-core pipe-tests, and loading down only one
> CPU per core, I was able to get a lot more activity in the other
> idle states.
> The test runs for 100 minutes, and the results change with time, but
> I'll leave that investigation for another day (there is no throttling):
>
> 1st 50 tests:
> Before: 3.888 uSec/loop
> After: 3.764 uSec/loop
> Menu: 3.464 uSec/loop
>
> Tests 50 to 100:
> Before: 4.329 uSec/loop
> After: 3.919 uSec/loop
> Menu: 3.514 uSec/loop
>
> Tests 200 to 250:
> Before: 5.089 uSec/loop
> After: 4.364 uSec/loop
> Menu: 4.619 uSec/loop
>
> Tests 280 to 330:
> Before: 5.142 uSec/loop
> After: 4.464 uSec/loop
> Menu: 4.619 uSec/loop
>
> Notice that the kernel with this patch set applied eventually does
> better than the menu governor. Its processor package power also
> always remains lower than with the menu governor.
That's good news, thanks!
> The results can be viewed graphically at the above link, but the
> most dramatic results are:
>
> Idle state 3 "above" % goes from 70% to 5%.
> Idle state 2 "below" % goes from 13% to less than 1%.
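>
> (Those percentages come from the per-state "usage", "above" and
> "below" counters in sysfs; a sketch of the computation, assuming
> the standard cpuidle sysfs files and, for a real run, deltas taken
> over the sampling interval rather than raw totals:)
>
> /* above% / below% for one idle state of CPU 0, from raw counters. */
> #include <stdio.h>
>
> static unsigned long long read_stat(int state, const char *name)
> {
> 	char path[96];
> 	unsigned long long v = 0;
> 	FILE *f;
>
> 	snprintf(path, sizeof(path),
> 		 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/%s",
> 		 state, name);
> 	f = fopen(path, "r");
> 	if (f) {
> 		fscanf(f, "%llu", &v);
> 		fclose(f);
> 	}
> 	return v;
> }
>
> int main(void)
> {
> 	unsigned long long usage = read_stat(3, "usage");
> 	unsigned long long above = read_stat(3, "above");
> 	unsigned long long below = read_stat(3, "below");
>
> 	if (usage)
> 		printf("state3: above %.1f%%, below %.1f%%\n",
> 		       100.0 * above / usage, 100.0 * below / usage);
> 	return 0;
> }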
This also looks promising.
Thank you for all of the results!