Date:   Thu, 25 Feb 2021 12:45:06 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     vincent.donnefort@....com, peterz@...radead.org, mingo@...hat.com,
        vincent.guittot@...aro.org
Cc:     linux-kernel@...r.kernel.org, qperret@...gle.com,
        patrick.bellasi@...bug.net, valentin.schneider@....com
Subject: Re: [PATCH v2 1/2] sched/fair: Fix task utilization accountability in
 compute_energy()

On 25/02/2021 09:36, vincent.donnefort@....com wrote:
> From: Vincent Donnefort <vincent.donnefort@....com>

[...]

> cpu_util_next() estimates the CPU utilization that would happen if the
> task was placed on dst_cpu as follows:
> 
>   max(cpu_util + task_util, cpu_util_est + _task_util_est)
> 
> The task contribution to the energy delta can then be either:
> 
>   (1) _task_util_est, on a mostly idle CPU, where cpu_util is close to 0
>       and _task_util_est > cpu_util.
>   (2) task_util, on a mostly busy CPU, where cpu_util > _task_util_est.
> 
>   (cpu_util_est doesn't appear here. It is 0 when a CPU is idle and
>    otherwise must be small enough so that feec() takes the CPU as a
>    potential target for the task placement)
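
To spell out (in my own shorthand, not the exact fair.c expressions)
how the task contribution falls out of the max() above:

  without @p: max(cpu_util, cpu_util_est)
  with @p:    max(cpu_util + task_util, cpu_util_est + _task_util_est)

  (1) mostly idle CPU (cpu_util close to 0, cpu_util_est 0 or small):
      delta ~= _task_util_est
  (2) mostly busy CPU (the cpu_util terms win both max()):
      delta == task_util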

I still don't quite get the reasoning for (2), i.e. why task_util is
used as the task contribution.

So in (2) we use 'cpu_util + task_util' instead of 'cpu_util_est +
_task_util_est'.

I.e. since _task_util_est is always >= task_util during wakeup, for (2)
to hold cpu_util must be greater than cpu_util_est, and by more than
(_task_util_est - task_util).

I can see that happening for a CPU whose cpu_util has a fair amount of
contributions from blocked tasks, which cpu_util_est wouldn't have.
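
E.g. with made-up numbers: cpu_util = 600 (including blocked
contributions), cpu_util_est = 350, task_util = 100,
_task_util_est = 200:

  max(600 + 100, 350 + 200) = 700

so the 'cpu_util + task_util' side wins and the task contribution is
task_util = 100, precisely because cpu_util - cpu_util_est = 250 is
larger than _task_util_est - task_util = 100.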

[...]

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 7043bb0f2621..146ac9fec4b6 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6573,8 +6573,24 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>  	 * its pd list and will not be accounted by compute_energy().
>  	 */
>  	for_each_cpu_and(cpu, pd_mask, cpu_online_mask) {
> -		unsigned long cpu_util, util_cfs = cpu_util_next(cpu, p, dst_cpu);
> -		struct task_struct *tsk = cpu == dst_cpu ? p : NULL;
> +		unsigned long util_freq = cpu_util_next(cpu, p, dst_cpu);
> +		unsigned long cpu_util, util_running = util_freq;
> +		struct task_struct *tsk = NULL;
> +
> +		/*
> +		 * When @p is placed on @cpu:
> +		 *
> +		 * util_running = max(cpu_util, cpu_util_est) +
> +		 *		  max(task_util, _task_util_est)
> +		 *
> +		 * while cpu_util_next is: max(cpu_util + task_util,
> +		 *			       cpu_util_est + _task_util_est)
> +		 */

Nitpick:

s/on @cpu/on @dst_cpu ?

s/while cpu_util_next is/while cpu_util_next(cpu, p, cpu) would be

If dst_cpu != cpu (including dst_cpu == -1), task_util and
_task_util_est are not added to util and util_est respectively.

I'm not sure whether this is clear from the source code here?
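
I.e., in the same shorthand as the comment (roughly, assuming @p isn't
already running on @cpu):

  cpu_util_next(cpu, p, dst_cpu), dst_cpu != cpu:
      max(cpu_util, cpu_util_est)

  cpu_util_next(cpu, p, cpu):
      max(cpu_util + task_util, cpu_util_est + _task_util_est)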

[...]

Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
