Open Source and information security mailing list archives
 
Message-ID: <d8a805d0-74ba-44dd-b5f7-8d2373c9538f@arm.com>
Date: Wed, 9 Jul 2025 11:25:09 +0100
From: Christian Loehle <christian.loehle@....com>
To: Zihuan Zhang <zhangzihuan@...inos.cn>, mingo@...hat.com,
 peterz@...radead.org, juri.lelli@...hat.com, vincent.guittot@...aro.org,
 dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
 mgorman@...e.de, vschneid@...hat.com, patrick.bellasi@....com,
 qyousef@...alina.io, xuewen.yan@...soc.com
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] sched/uclamp: Initialize uclamp_rq alongside rq setup
 in sched_init()

On 6/27/25 08:45, Zihuan Zhang wrote:
> uclamp_rq is currently initialized for all possible CPUs in a separate
> loop within init_uclamp(). This creates a dependency on the ordering of
> sched_init() and init_uclamp(), and duplicates the per-CPU iteration.
> 
> This patch simplifies the logic by moving uclamp_rq initialization into
> sched_init(), immediately after each cpu_rq is initialized. This ensures
> uclamp_rq setup is tightly coupled with rq setup and removes the need for
> a redundant loop.
> 
> Signed-off-by: Zihuan Zhang <zhangzihuan@...inos.cn>
> ---
>  kernel/sched/core.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 8988d38d46a3..a160ec8645b2 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1998,7 +1998,7 @@ static void uclamp_post_fork(struct task_struct *p)
>  	uclamp_update_util_min_rt_default(p);
>  }
>  
> -static void __init init_uclamp_rq(struct rq *rq)
> +static void init_uclamp_rq(struct rq *rq)
>  {
>  	enum uclamp_id clamp_id;
>  	struct uclamp_rq *uc_rq = rq->uclamp;
> @@ -2016,10 +2016,6 @@ static void __init init_uclamp(void)
>  {
>  	struct uclamp_se uc_max = {};
>  	enum uclamp_id clamp_id;
> -	int cpu;
> -
> -	for_each_possible_cpu(cpu)
> -		init_uclamp_rq(cpu_rq(cpu));
>  
>  	for_each_clamp_id(clamp_id) {
>  		uclamp_se_set(&init_task.uclamp_req[clamp_id],
> @@ -2043,6 +2039,7 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p) { }
>  static inline void uclamp_fork(struct task_struct *p) { }
>  static inline void uclamp_post_fork(struct task_struct *p) { }
>  static inline void init_uclamp(void) { }
> +static inline void init_uclamp_rq(struct rq *rq) { }
>  #endif /* CONFIG_UCLAMP_TASK */
>  
>  bool sched_task_on_rq(struct task_struct *p)
> @@ -8586,6 +8583,7 @@ void __init sched_init(void)
>  		init_cfs_rq(&rq->cfs);
>  		init_rt_rq(&rq->rt);
>  		init_dl_rq(&rq->dl);
> +		init_uclamp_rq(rq);
>  #ifdef CONFIG_FAIR_GROUP_SCHED
>  		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
>  		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;


I don't necessarily prefer one over the other; both look fine to me FWIW.
