Message-ID: <20250314090909.8404-1-xuewen.yan@unisoc.com>
Date: Fri, 14 Mar 2025 17:09:09 +0800
From: Xuewen Yan <xuewen.yan@...soc.com>
To: <dietmar.eggemann@....com>, <vincent.guittot@...aro.org>,
        <mingo@...hat.com>, <peterz@...radead.org>, <juri.lelli@...hat.com>
CC: <rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
        <vschneid@...hat.co>, <linux-kernel@...r.kernel.org>,
        <hongyan.xia2@....com>, <qyousef@...alina.io>, <ke.wang@...soc.com>,
        <di.shen@...soc.com>, <xuewen.yan94@...il.com>
Subject: [RFC PATCH] sched/util_est: Do not sub the delayed-task's util-est

In cpu_util_without(), when a task is on the rq we subtract its
util_est from the CPU's utilization estimate. However, a
delayed-dequeue task still has on_rq set even though its util_est
was already subtracted when it went to sleep, so there is no need
to subtract it again. Add a check for delayed tasks to avoid the
double subtraction.

In addition, as discussed in [1], the util_est enqueue/dequeue
logic can be simplified. Do so by aligning its conditions with
those of uclamp.

[1] https://lore.kernel.org/all/CAB8ipk8pEvOtCm-d0o1rsekwxPWUHk9iBGtt9TLTWW-iWTQKiA@mail.gmail.com/
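For illustration only (not part of the patch): a minimal userspace sketch of
the dst_cpu != cpu accounting changed below, modeling why a delayed-dequeue
task's util_est must not be subtracted a second time. The struct and helper
names here are made up for the sketch; in the kernel the condition lives in
cpu_util() and uses lsub_positive().

```c
#include <stdbool.h>

/* Toy model of the relevant task state; the fields mirror the kernel's
 * p->se.sched_delayed and task_on_rq_queued(p), but this is not kernel code. */
struct toy_task {
	bool sched_delayed;	/* delayed dequeue: util_est already removed at sleep */
	bool on_rq_queued;
	unsigned long util_est;
};

/* Kernel-style lsub_positive(): subtract without underflowing below zero. */
static unsigned long lsub_positive(unsigned long a, unsigned long b)
{
	return a > b ? a - b : 0;
}

/*
 * Patched condition: discount p's util_est only when p is currently
 * running, or queued and NOT a delayed-dequeue task -- a delayed task's
 * contribution was already removed when it went to sleep.
 */
static unsigned long cpu_util_est_without(unsigned long cpu_util_est,
					  const struct toy_task *p,
					  bool p_is_current)
{
	if (p_is_current || (p->on_rq_queued && !p->sched_delayed))
		cpu_util_est = lsub_positive(cpu_util_est, p->util_est);
	return cpu_util_est;
}
```

With cpu_util_est = 500 and a queued task whose util_est is 100, the helper
returns 400; flip sched_delayed to true and it returns 500, since subtracting
again would discount the same contribution twice.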

Signed-off-by: Xuewen Yan <xuewen.yan@...soc.com>
---
 kernel/sched/fair.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c798d2795243..bebf40a0fa4e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6930,7 +6930,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 * Let's add the task's estimated utilization to the cfs_rq's
 	 * estimated utilization, before we update schedutil.
 	 */
-	if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags & ENQUEUE_RESTORE))))
+	if (!p->se.sched_delayed || (flags & ENQUEUE_DELAYED))
 		util_est_enqueue(&rq->cfs, p);
 
 	if (flags & ENQUEUE_DELAYED) {
@@ -7168,7 +7168,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
  */
 static bool dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 {
-	if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags & DEQUEUE_SAVE))))
+	if (!p->se.sched_delayed)
 		util_est_dequeue(&rq->cfs, p);
 
 	util_est_update(&rq->cfs, p, flags & DEQUEUE_SLEEP);
@@ -8037,7 +8037,8 @@ cpu_util(int cpu, struct task_struct *p, int dst_cpu, int boost)
 		 */
 		if (dst_cpu == cpu)
 			util_est += _task_util_est(p);
-		else if (p && unlikely(task_on_rq_queued(p) || current == p))
+		else if (p && unlikely(current == p ||
+			 (task_on_rq_queued(p) && !p->se.sched_delayed)))
 			lsub_positive(&util_est, _task_util_est(p));
 
 		util = max(util, util_est);
-- 
2.25.1

