Message-ID: <b5d83fcd-09fb-4680-a594-d4848fddc50a@arm.com>
Date:   Thu, 30 Nov 2023 14:42:43 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Vincent Guittot <vincent.guittot@...aro.org>, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com, rostedt@...dmis.org,
        bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
        vschneid@...hat.com, corbet@....net, alexs@...nel.org,
        siyanteng@...ngson.cn, qyousef@...alina.io,
        linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org
Cc:     lukasz.luba@....com, hongyan.xia2@....com
Subject: Re: [PATCH 2/2] sched/fair: Simplify util_est

On 27/11/2023 15:32, Vincent Guittot wrote:
> With UTIL_EST_FASTUP now being permanent, we can take advantage of the
> fact that the ewma jumps directly to a higher utilization at dequeue to
> simplify util_est and remove the enqueued field.
> 

Did a simple test with a ramp-up/ramp-down (10-80-10%) task affined to
a CPU.

https://nbviewer.org/github/deggeman/lisa/blob/ipynbs/ipynb/scratchpad/util_est_fastup.ipynb
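
For reference, roughly what that test task does (minimal standalone
sketch of a 10-80-10% duty-cycle task pinned to one CPU; the actual run
used the LISA notebook above, not this):

#define _GNU_SOURCE
#include <sched.h>
#include <time.h>
#include <unistd.h>

/* Spin for roughly 'us' microseconds. */
static void busy_for(long us)
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	do {
		clock_gettime(CLOCK_MONOTONIC, &t1);
	} while ((t1.tv_sec - t0.tv_sec) * 1000000L +
		 (t1.tv_nsec - t0.tv_nsec) / 1000L < us);
}

int main(void)
{
	const long period_us = 16000;
	const int duty[] = { 10, 80, 10 };	/* ramp-up / ramp-down */
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);			/* affine to CPU0 */
	sched_setaffinity(0, sizeof(set), &set);

	for (int phase = 0; phase < 3; phase++) {
		for (int i = 0; i < 500; i++) {	/* ~8s per phase */
			long run = period_us * duty[phase] / 100;

			busy_for(run);
			usleep(period_us - run);
		}
	}

	return 0;
}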

LGTM.

[...]

> @@ -4879,27 +4865,22 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
>  	 * Skip update of task's estimated utilization when its members are
>  	 * already ~1% close to its last activation value.
>  	 */
> -	last_ewma_diff = ue.enqueued - ue.ewma;
> -	last_enqueued_diff -= ue.enqueued;
> -	if (within_margin(last_ewma_diff, UTIL_EST_MARGIN)) {
> -		if (!within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
> -			goto done;
> -
> -		return;
> -	}
> +	last_ewma_diff = ewma - dequeued;
> +	if (last_ewma_diff < UTIL_EST_MARGIN)
> +		goto done;
>  
>  	/*
>  	 * To avoid overestimation of actual task utilization, skip updates if
>  	 * we cannot grant there is idle time in this CPU.
>  	 */
> -	if (task_util(p) > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
> +	if (dequeued > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
>  		return;

Not directly related to the changes: Shouldn't we use `goto done` here
as well to rearm UTIL_AVG_UNCHANGED?
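
I.e. something like this (just a sketch on top of this patch, untested):

	/*
	 * To avoid overestimation of actual task utilization, skip the
	 * update if we cannot grant there is idle time in this CPU, but
	 * still go through done: so UTIL_AVG_UNCHANGED gets set again.
	 */
	if (dequeued > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
		goto done;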

>  	/*
>  	 * To avoid underestimate of task utilization, skip updates of EWMA if
>  	 * we cannot grant that thread got all CPU time it wanted.
>  	 */
> -	if ((ue.enqueued + UTIL_EST_MARGIN) < task_runnable(p))
> +	if ((dequeued + UTIL_EST_MARGIN) < task_runnable(p))
>  		goto done;
>  
>  
> @@ -4914,18 +4895,18 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
>  	 *  ewma(t) = w *  task_util(p) + (1-w) * ewma(t-1)
>  	 *          = w *  task_util(p) +         ewma(t-1)  - w * ewma(t-1)
>  	 *          = w * (task_util(p) -         ewma(t-1)) +     ewma(t-1)
> -	 *          = w * (      last_ewma_diff            ) +     ewma(t-1)
> -	 *          = w * (last_ewma_diff  +  ewma(t-1) / w)
> +	 *          = w * (      -last_ewma_diff           ) +     ewma(t-1)
> +	 *          = w * (-last_ewma_diff +  ewma(t-1) / w)
>  	 *
>  	 * Where 'w' is the weight of new samples, which is configured to be
>  	 * 0.25, thus making w=1/4 ( >>= UTIL_EST_WEIGHT_SHIFT)
>  	 */

The text above still mentions ue.enqueued and that we store the current
PELT value ... which isn't the case anymore.
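
Purely for illustration, the decay path now boils down to the following
fixed-point update (standalone sketch; UTIL_EST_WEIGHT_SHIFT assumed to
be 2, i.e. w = 1/4 as per the comment):

#include <stdio.h>

#define UTIL_EST_WEIGHT_SHIFT	2	/* w = 1/4 */

/*
 * Decay path of the new util_est_update(): here ewma >= dequeued,
 * since a higher 'dequeued' makes the ewma jump to it directly.
 *
 *   ewma(t) = ewma(t-1) - w * (ewma(t-1) - dequeued)
 */
static unsigned int ewma_decay(unsigned int ewma, unsigned int dequeued)
{
	unsigned int last_ewma_diff = ewma - dequeued;

	ewma <<= UTIL_EST_WEIGHT_SHIFT;
	ewma  -= last_ewma_diff;
	ewma >>= UTIL_EST_WEIGHT_SHIFT;

	return ewma;
}

int main(void)
{
	unsigned int ewma = 800, dequeued = 400;

	/* Closes 1/4 of the gap per dequeue: 700, 625, 568, ... */
	for (int i = 0; i < 5; i++) {
		ewma = ewma_decay(ewma, dequeued);
		printf("ewma = %u\n", ewma);
	}

	return 0;
}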


> -	ue.ewma <<= UTIL_EST_WEIGHT_SHIFT;
> -	ue.ewma  += last_ewma_diff;
> -	ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
> +	ewma <<= UTIL_EST_WEIGHT_SHIFT;
> +	ewma  -= last_ewma_diff;
> +	ewma >>= UTIL_EST_WEIGHT_SHIFT;
>  done:
> -	ue.enqueued |= UTIL_AVG_UNCHANGED;
> -	WRITE_ONCE(p->se.avg.util_est, ue);
> +	ewma |= UTIL_AVG_UNCHANGED;
> +	WRITE_ONCE(p->se.avg.util_est, ewma);
>  
>  	trace_sched_util_est_se_tp(&p->se);
>  }

[...]

Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
