Message-ID: <20251217070040.8723-1-zhanxusheng@xiaomi.com>
Date: Wed, 17 Dec 2025 15:00:40 +0800
From: Zhan Xusheng <zhanxusheng1024@...il.com>
To: peterz@...radead.org
Cc: vincent.guittot@...aro.org,
	dietmar.eggemann@....com,
	linux-sched@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Zhan Xusheng <zhanxusheng@...omi.com>
Subject: [PATCH] sched/fair: Factor out common sched_entity stats/task lookup

The fair scheduler has several update_stats_*_fair() helpers which
open-code the same boilerplate to retrieve sched_statistics and the
associated task (if any) from a sched_entity.

Factor this common logic into a new static inline helper,
get_se_stats_and_task(), to reduce duplication and improve readability,
without changing behaviour or control flow.

No functional change intended.

Signed-off-by: Zhan Xusheng <zhanxusheng@...omi.com>
---
 kernel/sched/fair.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index da46c3164537..b4a9319a5753 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1243,6 +1243,15 @@ static void update_curr_fair(struct rq *rq)
 	update_curr(cfs_rq_of(&rq->donor->se));
 }
 
+static inline void
+get_se_stats_and_task(struct sched_entity *se,
+		      struct sched_statistics **stats,
+		      struct task_struct **p)
+{
+	*stats = __schedstats_from_se(se);
+	*p = entity_is_task(se) ? task_of(se) : NULL;
+}
+
 static inline void
 update_stats_wait_start_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
@@ -1252,10 +1261,7 @@ update_stats_wait_start_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (!schedstat_enabled())
 		return;
 
-	stats = __schedstats_from_se(se);
-
-	if (entity_is_task(se))
-		p = task_of(se);
+	get_se_stats_and_task(se, &stats, &p);
 
 	__update_stats_wait_start(rq_of(cfs_rq), p, stats);
 }
@@ -1269,7 +1275,7 @@ update_stats_wait_end_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (!schedstat_enabled())
 		return;
 
-	stats = __schedstats_from_se(se);
+	get_se_stats_and_task(se, &stats, &p);
 
 	/*
 	 * When the sched_schedstat changes from 0 to 1, some sched se
@@ -1280,9 +1286,6 @@ update_stats_wait_end_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (unlikely(!schedstat_val(stats->wait_start)))
 		return;
 
-	if (entity_is_task(se))
-		p = task_of(se);
-
 	__update_stats_wait_end(rq_of(cfs_rq), p, stats);
 }
 
@@ -1295,10 +1298,7 @@ update_stats_enqueue_sleeper_fair(struct cfs_rq *cfs_rq, struct sched_entity *se
 	if (!schedstat_enabled())
 		return;
 
-	stats = __schedstats_from_se(se);
-
-	if (entity_is_task(se))
-		tsk = task_of(se);
+	get_se_stats_and_task(se, &stats, &tsk);
 
 	__update_stats_enqueue_sleeper(rq_of(cfs_rq), tsk, stats);
 }
-- 
2.43.0

