Message-ID: <20251008140827.GY4067720@noisy.programming.kicks-ass.net>
Date: Wed, 8 Oct 2025 16:08:27 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, dietmar.eggemann@....com,
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
	vschneid@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/fair: Fix pelt lost idle time detection

On Wed, Oct 08, 2025 at 03:12:14PM +0200, Vincent Guittot wrote:
> The check for lost idle PELT time should always be done when
> pick_next_task_fair() fails to pick a task, not only when it is called
> from the fair-class fast path.
> 
> This case happens when the last running task on the rq is an RT or DL
> task. When that task goes to sleep while the /Sum of util_sum of the rq
> is at its max value, we fail to account the lost idle time even though
> we should.
> 
> Fixes: 67692435c411 ("sched: Rework pick_next_task() slow-path")
> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>

Durr, sorry about that. Let me go queue this.

> ---
> 
> I noticed this while reviewing [1]
> 
> [1] https://lore.kernel.org/all/20251006105453.648473106@infradead.org/
> 
>  kernel/sched/fair.c | 26 +++++++++++++-------------
>  1 file changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b3be1e2749ce..dd0ea01af730 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8920,21 +8920,21 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
>  	return p;
>  
>  idle:
> -	if (!rf)
> -		return NULL;
> -
> -	new_tasks = sched_balance_newidle(rq, rf);
> +	if (rf) {
> +		new_tasks = sched_balance_newidle(rq, rf);
>  
> -	/*
> -	 * Because sched_balance_newidle() releases (and re-acquires) rq->lock, it is
> -	 * possible for any higher priority task to appear. In that case we
> -	 * must re-start the pick_next_entity() loop.
> -	 */
> -	if (new_tasks < 0)
> -		return RETRY_TASK;
> +		/*
> +		 * Because sched_balance_newidle() releases (and re-acquires)
> +		 * rq->lock, it is possible for any higher priority task to
> +		 * appear. In that case we must re-start the pick_next_entity()
> +		 * loop.
> +		 */
> +		if (new_tasks < 0)
> +			return RETRY_TASK;
>  
> -	if (new_tasks > 0)
> -		goto again;
> +		if (new_tasks > 0)
> +			goto again;
> +	}
>  
>  	/*
>  	 * rq is about to be idle, check if we need to update the
> -- 
> 2.43.0
> 
