Message-ID: <YDPajlnvgkonocpp@google.com>
Date:   Mon, 22 Feb 2021 16:23:42 +0000
From:   Quentin Perret <qperret@...gle.com>
To:     Vincent Donnefort <vincent.donnefort@....com>
Cc:     peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, linux-kernel@...r.kernel.org,
        patrick.bellasi@...bug.net, valentin.schneider@....com
Subject: Re: [PATCH] sched/fair: Fix task utilization accountability in
 cpu_util_next()

On Monday 22 Feb 2021 at 15:58:56 (+0000), Quentin Perret wrote:
> But in any case, if we're going to address this, I'm still not sure this
> patch will be what we want. As per my first comment, we need to keep the
> frequency estimation right.

Totally untested, but I think in principle you would want something like
the snippet below. Would that work?

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04a3ce20da67..6594d875c6ac 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6534,8 +6534,13 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
         * its pd list and will not be accounted by compute_energy().
         */
        for_each_cpu_and(cpu, pd_mask, cpu_online_mask) {
-               unsigned long cpu_util, util_cfs = cpu_util_next(cpu, p, dst_cpu);
+               unsigned long util_freq = cpu_util_next(cpu, p, dst_cpu);
+               unsigned long util_running = cpu_util_without(cpu, p);
                struct task_struct *tsk = cpu == dst_cpu ? p : NULL;
+               unsigned long cpu_util;
+
+               if (cpu == dst_cpu)
+                       util_running += task_util_est(p);

                /*
                 * Busy time computation: utilization clamping is not
@@ -6543,7 +6548,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
                 * is already enough to scale the EM reported power
                 * consumption at the (eventually clamped) cpu_capacity.
                 */
-               sum_util += schedutil_cpu_util(cpu, util_cfs, cpu_cap,
+               sum_util += schedutil_cpu_util(cpu, util_running, cpu_cap,
                                               ENERGY_UTIL, NULL);

                /*
@@ -6553,7 +6558,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
                 * NOTE: in case RT tasks are running, by default the
                 * FREQUENCY_UTIL's utilization can be max OPP.
                 */
-               cpu_util = schedutil_cpu_util(cpu, util_cfs, cpu_cap,
+               cpu_util = schedutil_cpu_util(cpu, util_freq, cpu_cap,
                                              FREQUENCY_UTIL, tsk);
                max_util = max(max_util, cpu_util);
        }
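
To make the difference between the two signals concrete, here is a toy,
userspace-only sketch (hypothetical names, not kernel code). For the
dst CPU, cpu_util_next() effectively returns
max(util + task_util, util_est + _task_util_est), while
cpu_util_without() + task_util_est() gives
max(util, util_est) + max(task_util, _task_util_est), so the two can
legitimately disagree:

/* Toy model (userspace C, hypothetical names) of the two views above. */
#include <stdio.h>

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

/* What cpu_util_next() computes for dst_cpu: the max of the sums. */
static unsigned long util_freq_view(unsigned long util, unsigned long util_est,
				    unsigned long task_util,
				    unsigned long task_util_est)
{
	return max_ul(util + task_util, util_est + task_util_est);
}

/* What cpu_util_without() + task_util_est() computes: the sum of the maxes. */
static unsigned long util_running_view(unsigned long util,
				       unsigned long util_est,
				       unsigned long task_util,
				       unsigned long task_util_est)
{
	return max_ul(util, util_est) + max_ul(task_util, task_util_est);
}

int main(void)
{
	/* A ramping-down rq with a task whose estimate exceeds its util. */
	unsigned long util = 300, util_est = 100;
	unsigned long task_util = 50, task_util_est = 200;

	printf("FREQUENCY_UTIL input: %lu\n",
	       util_freq_view(util, util_est, task_util, task_util_est));
	printf("ENERGY_UTIL input:    %lu\n",
	       util_running_view(util, util_est, task_util, task_util_est));
	return 0;
}

The FREQUENCY_UTIL input stays aligned with what schedutil will see
once the task is enqueued, while the ENERGY_UTIL input counts the
task's own estimated running time in full, which is what the busy time
accounting wants.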
