Date: Mon, 24 Jun 2024 11:23:49 +0100
From: Hongyan Xia <hongyan.xia2@....com>
To: Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Qais Yousef <qyousef@...alina.io>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Lukasz Luba <lukasz.luba@....com>,
	Christian Loehle <christian.loehle@....com>,
	Pierre Gondois <pierre.gondois@....com>,
	Youssef Esmat <youssefesmat@...gle.com>,
	linux-kernel@...r.kernel.org
Subject: [PATCH 0/7] uclamp sum aggregation

The current uclamp implementation, max aggregation, has several
drawbacks. This series gives an alternative implementation that
addresses those problems and shows other advantages, most notably:

1. Simplicity. Sum aggregation implements uclamp in less than half the
   code of max aggregation.
2. Effectiveness. Sum aggregation shows better uclamp effectiveness,
   either in benchmark scores or, more importantly, in energy efficiency.
3. Works on its own. No changes to cpufreq or other subsystems are
   needed.
4. Low overhead. There are no bucket operations and no need to tweak the
   number of buckets to balance overhead against uclamp granularity.

The key idea of sum aggregation is fairly simple. Each task has a
util_avg_bias, which is obtained by:

    util_avg_bias = clamp(util_avg, uclamp_min, uclamp_max) - util_avg;

If a CPU has N tasks p1, p2, ..., pN, then we sum the biases up and
obtain an rq-wide total bias:

    rq_bias = util_avg_bias1 + util_avg_bias2 + ... + util_avg_biasN;

The biased rq utilization, rq_util + rq_bias, is then used to select
OPPs and to make task placement decisions.
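
A minimal sketch of the idea in C (illustrative only; the helper names
below are made up for this cover letter, not necessarily the ones the
series introduces, and the kernel's clamp()/clamp_t() helpers are
assumed):

    /* Per-task bias: how far the clamps push this task's util_avg. */
    static long task_util_bias(unsigned long util_avg,
                               unsigned long uclamp_min,
                               unsigned long uclamp_max)
    {
            return (long)clamp(util_avg, uclamp_min, uclamp_max) -
                   (long)util_avg;
    }

    /*
     * The rq-wide bias is the sum of the biases of the tasks on the rq.
     * The value used for OPP selection and task placement is then the
     * plain utilization plus that bias, kept within sane bounds.
     */
    static unsigned long rq_util_biased(unsigned long rq_util, long rq_bias)
    {
            return clamp_t(long, (long)rq_util + rq_bias,
                           0, SCHED_CAPACITY_SCALE);
    }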

PATCH BREAKDOWN:

Patch 1/7 reverts a patch that accommodates uclamp_max tasks under max
aggregation. That patch is not needed by sum aggregation and creates
other problems for it. It has been discussed elsewhere that the patch
will be improved, so this revert may no longer be needed in the future.

Patches 2, 3 and 4 implement sum aggregation.

Patches 5 and 6 remove max aggregation.

Patch 7 applies PELT decay to negative util_avg_bias values. This
improves energy efficiency and task placement, but is not strictly
necessary.
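
A rough sketch of one possible shape of this (illustrative only; the
helper name is made up here, and decay_load() refers to the existing
helper in kernel/sched/pelt.c):

    /*
     * Decay a negative bias geometrically per elapsed PELT period
     * instead of dropping it immediately, so a capped task that goes
     * to sleep releases its negative bias gradually.
     */
    static void decay_negative_bias(long *util_avg_bias, u64 periods)
    {
            if (*util_avg_bias < 0)
                    *util_avg_bias = -(long)decay_load(-*util_avg_bias,
                                                       periods);
    }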

TESTING:

Two notebooks are shared at

https://nbviewer.org/github/honxia02/notebooks/blob/bb97afd74f49d4b8add8b28ad4378ea337c695a8/whitebox/max.ipynb
https://nbviewer.org/github/honxia02/notebooks/blob/bb97afd74f49d4b8add8b28ad4378ea337c695a8/whitebox/sum-offset.ipynb

The experiments in the notebooks were run on an Arm Juno r2 board.
CPU0-3 are little cores with a capacity of 383. CPU4-5 are big cores.
The rt-app profiles used for these experiments are included in the
notebooks.

Scenario 1: Scheduling 4 tasks with UCLAMP_MAX at 110.

The scheduling decisions are plotted in Out[11]. Both max and sum
aggregation understand the UCLAMP_MAX hint and schedule all 4 tasks on
the little cluster. Max aggregation sometimes schedules 2 tasks on 1
CPU, which is the reason why this series reverts the patch in the 1st
commit. However, the reverted patch may be improved and this revert may
not be needed in the future.

Scenario 2: Scheduling 2 tasks with UCLAMP_MIN and UCLAMP_MAX at a value
slightly above the capacity of the little CPU.

Results are in Out[17]. The purpose is to use UCLAMP_MIN to place tasks
on the big core. Both max and sum aggregation handle this correctly.

Scenario 3: Task A is a task with small utilization pinned to CPU4.
Task B is an always-running task pinned to CPU5, but with its UCLAMP_MAX
capped at 300. After a while, task A is re-pinned to CPU5, joining B.

Results are in Out[23]. Max aggregation sees a frequency spike at
239.75s. When zoomed in, one can see square-wave-like utilization values
caused by A periodically going to sleep. When A wakes up, its default
UCLAMP_MAX of 1024 uncaps B and drives the CPU to its highest frequency.
When A sleeps, B's UCLAMP_MAX takes effect again and reduces the rq
utilization. This happens repeatedly, hence the square wave. In
contrast, sum aggregation sees a normal increase in utilization when A
joins B, without any square-wave behavior.
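
The difference can be seen in a back-of-the-envelope calculation
(illustrative numbers only, not the actual signal values): suppose B is
always-running with util ~1024 and UCLAMP_MAX 300, and A has util ~100
with the default UCLAMP_MAX of 1024.

    Max aggregation (rq cap = max of the runnable tasks' UCLAMP_MAX):
        A runnable:  cap = max(1024, 300) = 1024  -> highest OPP
        A sleeping:  cap = 300                    -> low OPP
        => the request flips between the two, producing the square wave.

    Sum aggregation (B always contributes a bias of 300 - 1024 = -724):
        biased rq util = (1024 + 100) + (-724) = 400
        => adding A only raises the request by A's own ~100.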

Scenario 4: 4 always-running tasks with UCLAMP_MAX of 110 pinned to the
little PD (CPU0-3), and 4 identical tasks pinned to the big PD (CPU4-5).
After a while, the CPU pinning of the 4 tasks on the big PD is removed.

Results are in Out[29]. After unpinning, max aggregation moves all 8
tasks to the little cluster, but schedules 5 tasks on CPU0 and 1 each on
CPU1-3. In contrast, sum aggregation schedules 2 on each little CPU
after unpinning, which is the desired balanced task placement.

As in Scenario 1, the situation may not be as bad once the improved
version of the reverted patch lands in the future.

Scenario 5: Scheduling 8 tasks with UCLAMP_MAX of 110.

Results are in Out[35] and Out[36]. There's no doubt that sum
aggregation yields substantially better scheduling decisions. This tests
roughly the same thing as Scenario 4.

EVALUATION:

We backport the patches to kernel v6.1 on a Pixel 6 and run Android
benchmarks.

Speedometer:

We run Speedometer 2.0 to test ADPF/uclamp effectiveness. Because sum
aggregation does not circumvent the 20% OPP margin, we reduce the uclamp
values to 80% to keep the comparison fair.
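
(Assuming the "20% OPP margin" refers to the usual schedutil-style 1.25x
frequency headroom: under max aggregation a clamp value caps the final
request directly, while under sum aggregation the margin is applied on
top of the biased utilization, so an 80% clamp such as 0.8 * 1024 = 819
ends up requesting roughly 819 * 1.25 ~= 1024, keeping the two setups
comparable.)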

------------------------------------------------------
|  config   | score |   %   | CPU power (mW) |   %   |
|    max    | 161.4 |       |    2358.9      |       |
|  sum_0.8  | 166.0 | +2.85 |    2485.0      | +5.35 |
| sum_tuned | 162.6 | +0.74 |    2332.0      | -1.14 |
------------------------------------------------------

We see a consistently higher score and higher average power
consumption. Note that a higher score also means a shorter run-time, so
the total energy increase for sum_0.8 is only 1.88%.

We then reduce the uclamp values so that the Speedometer score is
roughly the same (the sum_tuned row). With this tuning, sum aggregation
actually gives lower average power and total energy consumption than max
aggregation.

UIBench:

-----------------------------------------------------------------
|  config   | jank percentage |   %    | CPU power (mW) |   %   |
|    max    |     0.375%      |        |     122.75     |       |
|  sum_0.8  |     0.440%      | +17.33 |     116.35     | -5.21 |
| sum_tuned |     0.220%      | -41.33 |     119.35     | -2.77 |
-----------------------------------------------------------------

UIBench on the Pixel 6 already has a low jank percentage by default.
Moving to sum aggregation gives a higher jank percentage but lower power
consumption. We then tune the hardcoded uclamp values in the Android
image to take advantage of the power budget, and can achieve a jank
reduction of more than 41% while still consuming less power than max
aggregation.

This result does not suggest that sum aggregation greatly outperforms
max aggregation, because the jank percentage is already very low.
Rather, it suggests that hardcoded uclamp values in the system (such as
those in init scripts) need to be adjusted to perform well under sum
aggregation. When tuned well, sum aggregation generally shows better
effectiveness, or the same effectiveness with less power consumption.

---
Changed in v1 since RFCv3:
- Rebase and add comments as suggested.
- Move util_est changes into a separate patch.
- Fix build failures on !CONFIG_SMP detected by Intel kernel bot.
- Avoid duplicate calls to util() and util_uclamp() functions by using
  the static key.

Changed in RFCv3:
- Address the biggest concern from multiple reviewers, that PELT and
  uclamp signals need to be kept separate. The new design is
  significantly simpler than the previous revision and splits
  util_avg_uclamp into the original util_avg (which this series does not
  touch at all) and a util_avg_bias component.
- Keep the tri-state return value of util_fits_cpu().
- Keep both the unclamped and clamped util_est, so that the right one is
  used depending on whether the caller does frequency or energy
  calculations.


Hongyan Xia (7):
  Revert "sched/uclamp: Set max_spare_cap_cpu even if max_spare_cap is
    0"
  sched/uclamp: Track a new util_avg_bias signal
  sched/uclamp: Add util_est_uclamp
  sched/fair: Use util biases for utilization and frequency
  sched/uclamp: Remove all uclamp bucket logic
  sched/uclamp: Simplify uclamp_eff_value()
  Propagate negative bias

 include/linux/sched.h            |   8 +-
 init/Kconfig                     |  32 ---
 kernel/sched/core.c              | 294 ++--------------------
 kernel/sched/cpufreq_schedutil.c |  12 +-
 kernel/sched/debug.c             |   2 +-
 kernel/sched/fair.c              | 419 ++++++++++++++++---------------
 kernel/sched/pelt.c              |  37 +++
 kernel/sched/rt.c                |   4 -
 kernel/sched/sched.h             | 131 +++-------
 kernel/sched/syscalls.c          |  16 +-
 10 files changed, 312 insertions(+), 643 deletions(-)

-- 
2.34.1

