Message-ID: <YMJiaO0IN2pN/EYY@hirez.programming.kicks-ass.net>
Date:   Thu, 10 Jun 2021 21:05:12 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Quentin Perret <qperret@...gle.com>
Cc:     mingo@...hat.com, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, qais.yousef@....com, rickyiu@...gle.com,
        wvw@...gle.com, patrick.bellasi@...bug.net, xuewen.yan94@...il.com,
        linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v2 1/3] sched: Fix UCLAMP_FLAG_IDLE setting

On Thu, Jun 10, 2021 at 03:13:04PM +0000, Quentin Perret wrote:
> The UCLAMP_FLAG_IDLE flag is set on a runqueue when dequeueing the last
> active task to maintain the last uclamp.max and prevent blocked util
> from suddenly becoming visible.
> 
> However, there is an asymmetry in how the flag is set and cleared which
> can lead to having the flag set whilst there are active tasks on the rq.
> Specifically, the flag is cleared in the uclamp_rq_inc() path, which is
> called at enqueue time, but set in uclamp_rq_dec_id() which is called
> both when dequeueing a task _and_ in the uclamp_update_active() path. As
> a result, when both uclamp_rq_{dec,inc}_id() are called from
> uclamp_update_active(), the flag ends up being set but not cleared,
> hence leaving the runqueue in a broken state.
> 
> Fix this by setting the flag in the uclamp_rq_inc_id() path to ensure
> things remain symmetrical.

The code you moved is not in uclamp_rq_inc_id() (although
uclamp_idle_reset() is called from there), nor does it _set_ the flag;
it clears it.

I'm thinking it's been a long warm day? ;-)
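
FWIW, the asymmetry being described is easy to see with a userspace
caricature of the paths involved (a sketch only, not the kernel code;
the toy_* names are made up to mirror the shape of the real helpers):

#include <stdio.h>

#define UCLAMP_FLAG_IDLE 0x01

struct toy_rq {
	unsigned int uclamp_flags;
	int nr_running;
};

/* per-clamp-id helpers: dec marks the rq idle when the last task leaves */
static void toy_rq_inc_id(struct toy_rq *rq)
{
	rq->nr_running++;
}

static void toy_rq_dec_id(struct toy_rq *rq)
{
	if (--rq->nr_running == 0)
		rq->uclamp_flags |= UCLAMP_FLAG_IDLE;
}

/* enqueue wrapper: where the flag is cleared before this patch */
static void toy_rq_inc(struct toy_rq *rq)
{
	toy_rq_inc_id(rq);
	rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
}

/* like uclamp_update_active(): calls the _id helpers directly, so the
 * clear in the enqueue wrapper is bypassed */
static void toy_update_active(struct toy_rq *rq)
{
	toy_rq_dec_id(rq);	/* sets UCLAMP_FLAG_IDLE: nr_running hits 0 */
	toy_rq_inc_id(rq);	/* no clear on this path */
}

int main(void)
{
	struct toy_rq rq = { 0, 0 };

	toy_rq_inc(&rq);		/* one runnable task, flag clear */
	toy_update_active(&rq);		/* refresh clamps on that task */

	printf("nr_running=%d idle=%d\n", rq.nr_running,
	       !!(rq.uclamp_flags & UCLAMP_FLAG_IDLE));
	/* prints "nr_running=1 idle=1": flag stuck with a runnable task */
	return 0;
}

(With the hunk below, the clear lands in uclamp_idle_reset(), which
uclamp_rq_inc_id() does call, so the dec+inc pairing clears it again.)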

> 
> Fixes: e496187da710 ("sched/uclamp: Enforce last task's UCLAMP_MAX")
> Reported-by: Rick Yiu <rickyiu@...gle.com>
> Reviewed-by: Qais Yousef <qais.yousef@....com>
> Signed-off-by: Quentin Perret <qperret@...gle.com>
> ---
>  kernel/sched/core.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 5226cc26a095..3b213402798e 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -980,6 +980,7 @@ static inline void uclamp_idle_reset(struct rq *rq, enum uclamp_id clamp_id,
>  	if (!(rq->uclamp_flags & UCLAMP_FLAG_IDLE))
>  		return;
>  
> +	rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
>  	WRITE_ONCE(rq->uclamp[clamp_id].value, clamp_value);
>  }
>  
> @@ -1252,10 +1253,6 @@ static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
>  
>  	for_each_clamp_id(clamp_id)
>  		uclamp_rq_inc_id(rq, p, clamp_id);
> -
> -	/* Reset clamp idle holding when there is one RUNNABLE task */
> -	if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
> -		rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
>  }
>  
>  static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
> -- 
> 2.32.0.272.g935e593368-goog
> 
