lists.openwall.net - Open Source and information security mailing list archives
Date: Fri, 10 May 2024 15:49:46 +0100
From: Luis Machado <luis.machado@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
 dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
 mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
 linux-kernel@...r.kernel.org, kprateek.nayak@....com,
 wuyun.abel@...edance.com, tglx@...utronix.de, efault@....de, nd
 <nd@....com>, John Stultz <jstultz@...gle.com>, Hongyan.Xia2@....com
Subject: Re: [RFC][PATCH 08/10] sched/fair: Implement delayed dequeue

On 5/2/24 11:26, Luis Machado wrote:
> On 4/29/24 15:33, Luis Machado wrote:
>> Hi Peter,
>>
>> On 4/26/24 10:32, Peter Zijlstra wrote:
>>> On Thu, Apr 25, 2024 at 01:49:49PM +0200, Peter Zijlstra wrote:
>>>> On Thu, Apr 25, 2024 at 12:42:20PM +0200, Peter Zijlstra wrote:
>>>>
>>>>>> I wonder if the delayed dequeue logic is having an unwanted effect on the calculation of
>>>>>> utilization/load of the runqueue and, as a consequence, we're scheduling things to run on
>>>>>> higher OPP's in the big cores, leading to poor decisions for energy efficiency.
>>>>>
>>>>> Notably util_est_update() gets delayed. Given we don't actually do an
>>>>> enqueue when a delayed task gets woken, it didn't seem to make sense to
>>>>> update that sooner.
>>>>
>>>> The PELT runnable values will be inflated because of delayed dequeue.
>>>> cpu_util() uses those in the @boost case, and as such this can indeed
>>>> affect things.
>>>>
>>>> This can also slightly affect the cgroup case, but since the delay goes
>>>> away as contention goes away, and the cgroup case must already assume
>>>> worst case overlap, this seems limited.
>>>>
>>>> /me goes ponder things moar.
>>>
>>> First order approximation of a fix would be something like the totally
>>> untested below I suppose...
>>
>> I gave this a try on the Pixel 6, and I noticed some improvement (see below), but not
>> enough to bring it back to the original levels.
>>
>> (1) m6.6-stock: Basic EEVDF with wakeup preemption fix (baseline)
>> (2) m6.6-eevdf-complete: m6.6-stock plus this series.
>> (3) m6.6-eevdf-complete-no-delay-dequeue: (2) + NO_DELAY_DEQUEUE
>> (4) m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero: (2) + NO_DELAY_DEQUEUE + NO_DELAY_ZERO
>> (5) m6.6-eevdf-complete-no-delay-zero: (2) + NO_DELAY_ZERO
>> (6) m6.6-eevdf-complete-pelt-fix: (2) + the proposed load_avg update patch.
>>
>> I included (3), (4) and (5) to exercise the impact of disabling the individual
>> scheduler features.
>>
>>
>> Energy use.
>>
>> +------------+------------------------------------------------------+-----------+
>> |  cluster   |                         tag                          | perc_diff |
>> +------------+------------------------------------------------------+-----------+
>> |    CPU     |                   m6.6-stock                         |   0.0%    |
>> |  CPU-Big   |                   m6.6-stock                         |   0.0%    |
>> | CPU-Little |                   m6.6-stock                         |   0.0%    |
>> |  CPU-Mid   |                   m6.6-stock                         |   0.0%    |
>> |    GPU     |                   m6.6-stock                         |   0.0%    |
>> |   Total    |                   m6.6-stock                         |   0.0%    |
>> |    CPU     |                m6.6-eevdf-complete                   |  114.51%  |
>> |  CPU-Big   |                m6.6-eevdf-complete                   |  90.75%   |
>> | CPU-Little |                m6.6-eevdf-complete                   |  98.74%   |
>> |  CPU-Mid   |                m6.6-eevdf-complete                   |  213.9%   |
>> |    GPU     |                m6.6-eevdf-complete                   |  -7.04%   |
>> |   Total    |                m6.6-eevdf-complete                   |  100.92%  |
>> |    CPU     |        m6.6-eevdf-complete-no-delay-dequeue          |  117.77%  |
>> |  CPU-Big   |        m6.6-eevdf-complete-no-delay-dequeue          |  113.79%  |
>> | CPU-Little |        m6.6-eevdf-complete-no-delay-dequeue          |  97.47%   |
>> |  CPU-Mid   |        m6.6-eevdf-complete-no-delay-dequeue          |  189.0%   |
>> |    GPU     |        m6.6-eevdf-complete-no-delay-dequeue          |  -6.74%   |
>> |   Total    |        m6.6-eevdf-complete-no-delay-dequeue          |  103.84%  |
>> |    CPU     | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |  120.45%  |
>> |  CPU-Big   | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |  113.65%  |
>> | CPU-Little | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |  99.04%   |
>> |  CPU-Mid   | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |  201.14%  |
>> |    GPU     | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |  -5.37%   |
>> |   Total    | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |  106.38%  |
>> |    CPU     |         m6.6-eevdf-complete-no-delay-zero            |  119.05%  |
>> |  CPU-Big   |         m6.6-eevdf-complete-no-delay-zero            |  107.55%  |
>> | CPU-Little |         m6.6-eevdf-complete-no-delay-zero            |  98.66%   |
>> |  CPU-Mid   |         m6.6-eevdf-complete-no-delay-zero            |  206.58%  |
>> |    GPU     |         m6.6-eevdf-complete-no-delay-zero            |  -5.25%   |
>> |   Total    |         m6.6-eevdf-complete-no-delay-zero            |  105.14%  |
>> |    CPU     |            m6.6-eevdf-complete-pelt-fix              |  105.56%  |
>> |  CPU-Big   |            m6.6-eevdf-complete-pelt-fix              |  100.45%  |
>> | CPU-Little |            m6.6-eevdf-complete-pelt-fix              |   94.4%   |
>> |  CPU-Mid   |            m6.6-eevdf-complete-pelt-fix              |  150.94%  |
>> |    GPU     |            m6.6-eevdf-complete-pelt-fix              |  -3.96%   |
>> |   Total    |            m6.6-eevdf-complete-pelt-fix              |  93.31%   |
>> +------------+------------------------------------------------------+-----------+
>>
>> Utilization and load levels.
>>
>> +---------+------------------------------------------------------+----------+-----------+
>> | cluster |                         tag                          | variable | perc_diff |
>> +---------+------------------------------------------------------+----------+-----------+
>> | little  |                   m6.6-stock                         |   load   |   0.0%    |
>> | little  |                   m6.6-stock                         |   util   |   0.0%    |
>> | little  |                m6.6-eevdf-complete                   |   load   |  29.56%   |
>> | little  |                m6.6-eevdf-complete                   |   util   |   55.4%   |
>> | little  |        m6.6-eevdf-complete-no-delay-dequeue          |   load   |  42.89%   |
>> | little  |        m6.6-eevdf-complete-no-delay-dequeue          |   util   |  69.47%   |
>> | little  | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |   load   |  51.05%   |
>> | little  | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |   util   |  76.55%   |
>> | little  |         m6.6-eevdf-complete-no-delay-zero            |   load   |  34.51%   |
>> | little  |         m6.6-eevdf-complete-no-delay-zero            |   util   |  72.53%   |
>> | little  |            m6.6-eevdf-complete-pelt-fix              |   load   |  29.96%   |
>> | little  |            m6.6-eevdf-complete-pelt-fix              |   util   |  59.82%   |
>> |   mid   |                   m6.6-stock                         |   load   |   0.0%    |
>> |   mid   |                   m6.6-stock                         |   util   |   0.0%    |
>> |   mid   |                m6.6-eevdf-complete                   |   load   |  29.37%   |
>> |   mid   |                m6.6-eevdf-complete                   |   util   |  75.22%   |
>> |   mid   |        m6.6-eevdf-complete-no-delay-dequeue          |   load   |   36.4%   |
>> |   mid   |        m6.6-eevdf-complete-no-delay-dequeue          |   util   |  80.28%   |
>> |   mid   | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |   load   |  30.35%   |
>> |   mid   | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |   util   |   90.2%   |
>> |   mid   |         m6.6-eevdf-complete-no-delay-zero            |   load   |  37.83%   |
>> |   mid   |         m6.6-eevdf-complete-no-delay-zero            |   util   |  93.79%   |
>> |   mid   |            m6.6-eevdf-complete-pelt-fix              |   load   |  33.57%   |
>> |   mid   |            m6.6-eevdf-complete-pelt-fix              |   util   |  67.83%   |
>> |   big   |                   m6.6-stock                         |   load   |   0.0%    |
>> |   big   |                   m6.6-stock                         |   util   |   0.0%    |
>> |   big   |                m6.6-eevdf-complete                   |   load   |  97.39%   |
>> |   big   |                m6.6-eevdf-complete                   |   util   |  12.63%   |
>> |   big   |        m6.6-eevdf-complete-no-delay-dequeue          |   load   |  139.69%  |
>> |   big   |        m6.6-eevdf-complete-no-delay-dequeue          |   util   |  22.58%   |
>> |   big   | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |   load   |  125.36%  |
>> |   big   | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero   |   util   |  23.15%   |
>> |   big   |         m6.6-eevdf-complete-no-delay-zero            |   load   |  128.56%  |
>> |   big   |         m6.6-eevdf-complete-no-delay-zero            |   util   |  25.03%   |
>> |   big   |            m6.6-eevdf-complete-pelt-fix              |   load   |  130.73%  |
>> |   big   |            m6.6-eevdf-complete-pelt-fix              |   util   |  17.52%   |
>> +---------+------------------------------------------------------+----------+-----------+
> 
> Going through the code, my understanding is that the util_est functions are getting
> called correctly, and in the right order: we first call util_est_enqueue, then util_est_dequeue,
> and finally util_est_update. So the stats *should* be correct.
> 
> On dequeuing (dequeue_task_fair), we immediately call util_est_dequeue, even for
> a DEQUEUE_DELAYED task, since we're not going to run the delayed-dequeue task for now,
> even though it is still on the rq.
> 
> We delay the util_est_update of dequeue_delayed tasks until a later time in dequeue_entities.
> 
> Eventually the delayed-dequeue task will have its lag zeroed when it becomes eligible again
> (requeue_delayed_entity), while still being on the rq. It will then get dequeued/enqueued (requeued)
> and marked as a non-delayed-dequeue task.
> 
> Next time we attempt to enqueue such a task (enqueue_task_fair), it will skip the ENQUEUE_DELAYED
> block and call util_est_enqueue.
> 
> Still, something seems to be signalling that util/load is high, causing tasks to migrate to the big cores.
> 
> Maybe we're not decaying the util/load properly at some point, and the numbers end up inflated.
> 
> I'll continue investigating.
> 

Just a quick update on this. While investigating this behavior, I
spotted very high loadavg values on an idle system. For instance:

load average: 4733.84, 4721.24, 4680.33

I wonder if anyone else has spotted this.

These values keep increasing slowly but steadily, and a bit more
rapidly when the system is under load. It makes me wonder if
we're failing to decrement nr_uninterruptible in some path,
since that count is what seems to be throwing the loadavg off.
