Date:   Thu, 10 Nov 2022 12:45:18 +0000
From:   Kajetan Puchalski <kajetan.puchalski@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Jian-Min Liu <jian-min.liu@...iatek.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Vincent Donnefort <vdonnefort@...gle.com>,
        Quentin Perret <qperret@...gle.com>,
        Patrick Bellasi <patrick.bellasi@...bug.net>,
        Abhijeet Dharmapurikar <adharmap@...cinc.com>,
        Qais Yousef <qais.yousef@....com>,
        linux-kernel@...r.kernel.org,
        Jonathan JMChen <jonathan.jmchen@...iatek.com>
Subject: Re: [RFC PATCH 0/1] sched/pelt: Change PELT halflife at runtime

Hi,

> Would something terrible like the below help some?
> 
> If not, I suppose it could be modified to take the current state as
> history. But basically it runs a faster pelt sum alongside the regular
> signal just for ramping up the frequency.
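
If I understand the idea correctly, the effect is roughly the following
(a standalone userspace sketch under my assumptions, not the actual
patch; pelt_step, util_fast and FAST_HALFLIFE_MS are made-up names for
illustration):

#include <math.h>
#include <stdio.h>

#define PELT_HALFLIFE_MS  32  /* regular PELT halflife */
#define FAST_HALFLIFE_MS   8  /* hypothetical faster halflife */

/* One 1ms step of a PELT-like geometric sum: decay the old value
 * and, if the task was running, accumulate the remainder. */
static double pelt_step(double util, int running, double halflife_ms)
{
        double y = pow(0.5, 1.0 / halflife_ms);  /* per-ms decay factor */

        return util * y + (running ? 1.0 - y : 0.0);
}

int main(void)
{
        double util = 0.0, util_fast = 0.0;
        int t;

        /* A task running flat out from idle: frequency selection takes
         * whichever signal is higher, so the fast sum only matters on
         * the way up. */
        for (t = 1; t <= 64; t++) {
                util = pelt_step(util, 1, PELT_HALFLIFE_MS);
                util_fast = pelt_step(util_fast, 1, FAST_HALFLIFE_MS);
                if (t % 8 == 0)
                        printf("t=%2dms util=%.3f fast=%.3f freq input=%.3f\n",
                               t, util, util_fast, fmax(util, util_fast));
        }
        return 0;
}

With an 8ms halflife the fast sum crosses ~0.95 within ~35ms, whereas
the regular 32ms one takes ~140ms, which would explain both the much
quicker frequency ramp and the extra power.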

As Dietmar mentioned in the other email, there seems to be an issue with
how the patch computes 'runtime'. Nevertheless, I tested it just to see
what would happen, so here are the results in case you're interested.

Here's a comparison of Jankbench results on a stock kernel (menu) vs
pelt_4 vs your pelt_rampup patch vs the performance cpufreq governor.

Max frame duration (ms)

+-----------------------+-----------+------------+
|        kernel         | iteration |   value    |
+-----------------------+-----------+------------+
|       menu            |    10     | 142.973401 |
|   menu_pelt_4         |    10     | 85.271279  |
|   menu_pelt_rampup    |    10     | 61.494636  |
|   menu_performance    |    10     | 40.930829  |
+-----------------------+-----------+------------+

Power usage (mW)

+--------------+-----------------------+-------+-----------+
|  chan_name   |        kernel         | value | perc_diff |
+--------------+-----------------------+-------+-----------+
| total_power  |       menu            | 144.6 |   0.0%    |
| total_power  |   menu_pelt_4         | 158.5 |   9.63%   |
| total_power  |   menu_pelt_rampup    | 272.1 |  88.23%   |
| total_power  |   menu_performance    | 485.6 |  235.9%   |
+--------------+-----------------------+-------+-----------+
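
(perc_diff here and below is relative to the 'menu' baseline, e.g. for
pelt_rampup (272.1 - 144.6) / 144.6 ≈ 88%; the table values appear to
have been computed before rounding.)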


Mean frame duration (ms)

+---------------+-----------------------+-------+-----------+
|   variable    |        kernel         | value | perc_diff |
+---------------+-----------------------+-------+-----------+
| mean_duration |       menu            | 13.9  |   0.0%    |
| mean_duration |   menu_pelt_4         | 14.5  |   4.74%   |
| mean_duration |   menu_pelt_rampup    |  8.3  |  -40.31%  |
| mean_duration |   menu_performance    |  4.4  |  -68.13%  |
+---------------+-----------------------+-------+-----------+

Jank percentage

+-----------+-----------------------+-------+-----------+
| variable  |        kernel         | value | perc_diff |
+-----------+-----------------------+-------+-----------+
| jank_perc |       menu            |  1.5  |   0.0%    |
| jank_perc |   menu_pelt_4         |  2.0  |  30.08%   |
| jank_perc |   menu_pelt_rampup    |  0.1  |  -93.09%  |
| jank_perc |   menu_performance    |  0.1  |  -96.29%  |
+-----------+-----------------------+-------+-----------+

[...]

Some variant of this that's tunable at runtime could be workable for the
purposes described earlier. At the very least, this further confirms that
it's the frequency manipulation that is responsible for the results here.
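
For what it's worth, the runtime tunable could be as simple as a debugfs
knob along these lines (a purely hypothetical sketch; fast_halflife_ms
is a made-up name, not an interface the patch provides):

#include <linux/debugfs.h>
#include <linux/init.h>

/* Hypothetical knob: halflife of the fast ramp-up sum, in ms. */
static u32 fast_halflife_ms = 8;

static int __init pelt_rampup_debugfs_init(void)
{
        /* Writable at runtime via /sys/kernel/debug/fast_halflife_ms. */
        debugfs_create_u32("fast_halflife_ms", 0644, NULL,
                           &fast_halflife_ms);
        return 0;
}
late_initcall(pelt_rampup_debugfs_init);

That way different halflives could be compared without rebooting between
runs.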

---
Kajetan
