Message-ID: <000701d3b950$e88e2bb0$b9aa8310$@net>
Date: Sun, 11 Mar 2018 08:52:02 -0700
From: "Doug Smythies" <dsmythies@...us.net>
To: "'Rafael J. Wysocki'" <rjw@...ysocki.net>
Cc: "'Rik van Riel'" <riel@...riel.com>,
"'Mike Galbraith'" <mgalbraith@...e.de>,
"'Thomas Gleixner'" <tglx@...utronix.de>,
"'Paul McKenney'" <paulmck@...ux.vnet.ibm.com>,
"'Thomas Ilsche'" <thomas.ilsche@...dresden.de>,
"'Frederic Weisbecker'" <fweisbec@...il.com>,
"'Linux PM'" <linux-pm@...r.kernel.org>,
"'Aubrey Li'" <aubrey.li@...ux.intel.com>,
"'LKML'" <linux-kernel@...r.kernel.org>,
"'Peter Zijlstra'" <peterz@...radead.org>,
"Doug Smythies" <dsmythies@...us.net>
Subject: RE: [RFC/RFT][PATCH v3 0/6] sched/cpuidle: Idle loop rework
On 2018.03.11 03:22 Rafael J. Wysocki wrote:
> On Sunday, March 11, 2018 8:43:02 AM CET Doug Smythies wrote:
>> On 2018.03.10 15:55 Rafael J. Wysocki wrote:
>>>On Saturday, March 10, 2018 5:07:36 PM CET Doug Smythies wrote:
>>>> On 2018.03.10 01:00 Rafael J. Wysocki wrote:
>>>
>> ... [snip] ...
>>
>>> The information that they often spend more time than a tick
>>>> period in state 0 in one go *is* relevant, though.
>>>
>>>
>>> That issue can be dealt with in a couple of ways and the patch below is a
>>> rather straightforward attempt to do that. The idea, basically, is to discard
>>> the result of governor prediction if the tick has been stopped already and
>>> the predicted idle duration is within the tick range.
>>>
>>> Please try it on top of the v3 and tell me if you see an improvement.
>>
>> It seems pretty good so far.
>> See a new line added to the previous graph, "rjwv3plus".
>>
>> http://fast.smythies.com/rjwv3plus_100.png
>
> OK, cool!
>
> Below is a respin of the last patch which also prevents shallow states from
> being chosen due to interactivity_req when the tick is stopped.
>
> You may also add a poll_idle() fix I've just posted:
>
> https://patchwork.kernel.org/patch/10274595/
>
> on top of this. It makes quite a bit of a difference for me. :-)
I will add and test, but I already know from testing previous versions
of this patch, from Rik van Riel and myself, that the results will be
awesome.
>
>> I'll do another 100% load on one CPU test overnight, this time with
>> a trace.
The only thing I'll add from the 7-hour overnight test with trace is that
there were 0 occurrences of excessive times spent in idle states above 0.
The histograms show those idle states almost entirely limited to one
tick period (I am using a 1000 Hz kernel). Exceptions:
Idle State: 3 CPU: 0: 1 occurrence of 1790 uSec (which is O.K. anyhow)
Idle State: 3 CPU: 6: 1 occurrence of 2372 uSec (which is O.K. anyhow)
... Doug