Message-ID: <adef8551-2739-4817-af3a-29357b2aab19@kylinos.cn>
Date: Thu, 10 Jul 2025 08:42:01 +0800
From: Zihuan Zhang <zhangzihuan@...inos.cn>
To: Christian Loehle <christian.loehle@....com>, mingo@...hat.com,
 peterz@...radead.org, juri.lelli@...hat.com, vincent.guittot@...aro.org,
 dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
 mgorman@...e.de, vschneid@...hat.com, patrick.bellasi@....com,
 qyousef@...alina.io, xuewen.yan@...soc.com
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] sched/uclamp: Initialize uclamp_rq alongside rq setup
 in sched_init()

Hi Christian,
Thanks for the feedback!

On 2025/7/9 18:25, Christian Loehle wrote:
> On 6/27/25 08:45, Zihuan Zhang wrote:
>> uclamp_rq is currently initialized for all possible CPUs in a separate
>> loop within init_uclamp(). This creates a dependency on the ordering of
>> sched_init() and init_uclamp(), and duplicates the per-CPU iteration.
>>
>> This patch simplifies the logic by moving uclamp_rq initialization into
>> sched_init(), immediately after each cpu_rq is initialized. This ensures
>> uclamp_rq setup is tightly coupled with rq setup and removes the need for
>> a redundant loop.
>>
>> Signed-off-by: Zihuan Zhang <zhangzihuan@...inos.cn>
>> ---
>>   kernel/sched/core.c | 8 +++-----
>>   1 file changed, 3 insertions(+), 5 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 8988d38d46a3..a160ec8645b2 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -1998,7 +1998,7 @@ static void uclamp_post_fork(struct task_struct *p)
>>   	uclamp_update_util_min_rt_default(p);
>>   }
>>   
>> -static void __init init_uclamp_rq(struct rq *rq)
>> +static void init_uclamp_rq(struct rq *rq)
>>   {
>>   	enum uclamp_id clamp_id;
>>   	struct uclamp_rq *uc_rq = rq->uclamp;
>> @@ -2016,10 +2016,6 @@ static void __init init_uclamp(void)
>>   {
>>   	struct uclamp_se uc_max = {};
>>   	enum uclamp_id clamp_id;
>> -	int cpu;
>> -
>> -	for_each_possible_cpu(cpu)
>> -		init_uclamp_rq(cpu_rq(cpu));
>>   
>>   	for_each_clamp_id(clamp_id) {
>>   		uclamp_se_set(&init_task.uclamp_req[clamp_id],
>> @@ -2043,6 +2039,7 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p) { }
>>   static inline void uclamp_fork(struct task_struct *p) { }
>>   static inline void uclamp_post_fork(struct task_struct *p) { }
>>   static inline void init_uclamp(void) { }
>> +static inline void init_uclamp_rq(struct rq *rq) { }
>>   #endif /* CONFIG_UCLAMP_TASK */
>>   
>>   bool sched_task_on_rq(struct task_struct *p)
>> @@ -8586,6 +8583,7 @@ void __init sched_init(void)
>>   		init_cfs_rq(&rq->cfs);
>>   		init_rt_rq(&rq->rt);
>>   		init_dl_rq(&rq->dl);
>> +		init_uclamp_rq(rq);
>>   #ifdef CONFIG_FAIR_GROUP_SCHED
>>   		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
>>   		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
>
> I don't necessarily prefer one over the other, both look fine to me FWIW.

Just to add one more point in favor of this change: by initializing 
uclamp_rq directly during the per-CPU rq setup in sched_init(), we avoid 
a separate for_each_possible_cpu() loop in init_uclamp().

This avoids a second pass over all possible CPUs, which can reduce 
boot-time initialization overhead, especially on systems with hundreds 
or even thousands of CPUs.

The logic also becomes easier to maintain since each rq and its 
associated structures are initialized in one place.

Let me know if there's anything else I should address.


Best regards,
Zihuan Zhang
