Message-ID: <c507c4ce-25c8-45d2-ad27-53ade0a58d40@arm.com>
Date: Thu, 27 Feb 2025 13:41:53 +0000
From: Hongyan Xia <hongyan.xia2@....com>
To: Christian Loehle <christian.loehle@....com>,
 Xuewen Yan <xuewen.yan@...soc.com>, peterz@...radead.org, mingo@...hat.com,
 juri.lelli@...hat.com, vincent.guittot@...aro.org, dietmar.eggemann@....com,
 Pierre Gondois <pierre.gondois@....com>, Luis Machado <luis.machado@....com>
Cc: rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
 vschneid@...hat.com, ke.wang@...soc.com, di.shen@...soc.com,
 xuewen.yan94@...il.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] sched/fair: Prevent from cpufreq not being updated
 when delayed-task is iowait

On 26/02/2025 12:08, Christian Loehle wrote:
> On 2/26/25 11:43, Xuewen Yan wrote:
>> Because a sched-delayed task may be in io-wait state, move the
>> requeue_delayed_entity() call to after cpufreq_update_util(), so that
>> the early return no longer skips the iowait cpufreq boost.
>>
>> Signed-off-by: Xuewen Yan <xuewen.yan@...soc.com>
>> ---
>>   kernel/sched/fair.c | 10 +++++-----
>>   1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 2d6d5582c3e9..040674734128 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6931,11 +6931,6 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>>   	if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags & ENQUEUE_RESTORE))))
>>   		util_est_enqueue(&rq->cfs, p);
>>   
>> -	if (flags & ENQUEUE_DELAYED) {
>> -		requeue_delayed_entity(se);
>> -		return;
>> -	}
>> -
>>   	/*
>>   	 * If in_iowait is set, the code below may not trigger any cpufreq
>>   	 * utilization updates, so do it here explicitly with the IOWAIT flag
>> @@ -6944,6 +6939,11 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>>   	if (p->in_iowait)
>>   		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
>>   
>> +	if (flags & ENQUEUE_DELAYED) {
>> +		requeue_delayed_entity(se);
>> +		return;
>> +	}
>> +
>>   	if (task_new && se->sched_delayed)
>>   		h_nr_runnable = 0;
>>   
> 
> I understand that the iowait cpufreq update isn't happening now (and
> that's a bug), but if we reorder, we may call
> cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT) followed by the
> cpufreq_update_util() in update_load_avg() from
> requeue_delayed_entity():
> 	update_load_avg()
> 		cpufreq_update_util()
> 
> and the latter will likely be dropped by the governor, so the update
> won't include the util of the (re-)enqueuing task, right?
> 
> I'll give it some more thought.

True, but I think the code was like this before anyway. On the
non-delayed path, the problem you mention still exists. I'm not saying
this is the right thing to do, just that this is how it has always
been.
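
For reference, on the current non-delayed path an iowait task already
produces roughly this back-to-back update sequence (call chain sketched
from my reading of the code, intermediate steps elided):

	enqueue_task_fair()
		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT)	<- explicit iowait boost
		enqueue_entity()
			update_load_avg()
				cpufreq_update_util()		<- may be rate-limited by the governor

So the patch would make the delayed path behave the same way, for
better or worse.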
