Message-ID: <20251218130211.32785-1-zhanxusheng@xiaomi.com>
Date: Thu, 18 Dec 2025 21:02:11 +0800
From: Zhan Xusheng <zhanxusheng1024@...il.com>
To: linux-kernel@...r.kernel.org
Cc: shkaushik@...erecomputing.com,
Zhan Xusheng <zhanxusheng@...omi.com>
Subject: [PATCH v2] sched/fair: factor out common sched_entity stats/task lookup
Several fair scheduler helpers (update_stats_*_fair()) repeat the same
boilerplate code to retrieve the sched_statistics and, if the entity is
a task, the associated task_struct from a sched_entity.

Factor that common logic out into a single small helper:

    static __always_inline void get_se_stats_and_task(...)

The helper reduces code duplication and improves readability without
changing behavior or control flow.
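
For illustration, this is how a caller such as update_stats_wait_start_fair()
reads after the change; the body follows from the hunks below, with the two
local declarations copied from the surrounding, unchanged code:

    static inline void
    update_stats_wait_start_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
    {
            struct sched_statistics *stats;
            struct task_struct *p = NULL;

            if (!schedstat_enabled())
                    return;

            get_se_stats_and_task(se, &stats, &p);

            __update_stats_wait_start(rq_of(cfs_rq), p, stats);
    }

The other two call sites change in the same way, except that
update_stats_wait_end_fair() keeps its early return on stats->wait_start
between the lookup and the __update_stats_wait_end() call.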
Based on feedback from Shubhang Kaushik Prasanna Kumar
<shkaushik@...erecomputing.com>, the helper was changed from inline to
__always_inline, which guarantees inlining in these critical hot paths
and avoids any potential call overhead.
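
As background: plain inline is only a hint, so the compiler may still keep an
out-of-line copy and emit a call, whereas the always_inline attribute forces
the body into the caller. In current kernels __always_inline expands to
roughly inline __attribute__((__always_inline__)) (see
include/linux/compiler_types.h). A minimal userspace sketch of the same
pattern, with a locally defined stand-in for the macro:

    #include <stdio.h>

    /* Stand-in for the kernel's __always_inline, for this example only. */
    #define my_always_inline inline __attribute__((__always_inline__))

    /* Same shape as get_se_stats_and_task(): one helper fills two
     * out-parameters so every caller can drop the same boilerplate. */
    static my_always_inline void get_pair(int src, int *a, int *b)
    {
            *a = src * 2;
            *b = src + 1;
    }

    int main(void)
    {
            int a, b;

            get_pair(21, &a, &b);   /* inlined into main() by the attribute */
            printf("%d %d\n", a, b);
            return 0;
    }

Disassembling this with and without optimization is expected to show the
helper body merged into main() in both cases, which is the property the
patch relies on in the scheduler hot paths.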
Although the helper takes the addresses of local variables (&stats, &p),
comparing the assembly generated for enqueue_task_fair() before and after
this patch shows no additional loads, spills, or function calls; the
instruction counts are identical:

    mov:  187 vs 187
    add:   13 vs 13
    lea:   22 vs 22
    call:  20 vs 20
    push:   6 vs 6
    pop:   12 vs 12

This confirms that the compiler's optimizations are preserved and that no
performance regression is expected in these hot paths.
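
Such counts can be collected from objdump -d output for kernel/sched/fair.o
built with and without the patch. The tally below is only a hypothetical
sketch of how that kind of comparison might be reproduced; it is not part of
the patch and not necessarily the exact method used for the numbers above:

    /* mnemonic-tally.c: count selected mnemonics in "objdump -d" output
     * read from stdin.  Prefix matching groups variants together, e.g.
     * movl/movzbl both count as "mov". */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            static const char * const ops[] = {
                    "mov", "add", "lea", "call", "push", "pop"
            };
            unsigned long counts[6] = { 0 };
            char line[512];
            int i;

            while (fgets(line, sizeof(line), stdin)) {
                    /* Instruction lines look like "addr:\t<bytes>\t<mnemonic> ...";
                     * the mnemonic follows the last tab. */
                    char *mn = strrchr(line, '\t');

                    if (!mn)
                            continue;
                    for (i = 0; i < 6; i++)
                            if (!strncmp(mn + 1, ops[i], strlen(ops[i])))
                                    counts[i]++;
            }

            for (i = 0; i < 6; i++)
                    printf("%s: %lu\n", ops[i], counts[i]);
            return 0;
    }

Running it as "objdump -d kernel/sched/fair.o | ./mnemonic-tally" for each
build (or on just the enqueue_task_fair() disassembly) and comparing the two
outputs gives a table in the format quoted above.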
Signed-off-by: Zhan Xusheng <zhanxusheng@...omi.com>
---
kernel/sched/fair.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index da46c3164537..bee30bfca6e5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1243,6 +1243,15 @@ static void update_curr_fair(struct rq *rq)
 	update_curr(cfs_rq_of(&rq->donor->se));
 }
 
+static __always_inline
+void get_se_stats_and_task(struct sched_entity *se,
+			   struct sched_statistics **stats,
+			   struct task_struct **p)
+{
+	*stats = __schedstats_from_se(se);
+	*p = entity_is_task(se) ? task_of(se) : NULL;
+}
+
 static inline void
 update_stats_wait_start_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
@@ -1252,10 +1261,7 @@ update_stats_wait_start_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (!schedstat_enabled())
 		return;
 
-	stats = __schedstats_from_se(se);
-
-	if (entity_is_task(se))
-		p = task_of(se);
+	get_se_stats_and_task(se, &stats, &p);
 
 	__update_stats_wait_start(rq_of(cfs_rq), p, stats);
 }
@@ -1269,7 +1275,7 @@ update_stats_wait_end_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (!schedstat_enabled())
 		return;
 
-	stats = __schedstats_from_se(se);
+	get_se_stats_and_task(se, &stats, &p);
 
 	/*
 	 * When the sched_schedstat changes from 0 to 1, some sched se
@@ -1280,9 +1286,6 @@ update_stats_wait_end_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (unlikely(!schedstat_val(stats->wait_start)))
 		return;
 
-	if (entity_is_task(se))
-		p = task_of(se);
-
 	__update_stats_wait_end(rq_of(cfs_rq), p, stats);
 }
 
@@ -1295,10 +1298,7 @@ update_stats_enqueue_sleeper_fair(struct cfs_rq *cfs_rq, struct sched_entity *se
 	if (!schedstat_enabled())
 		return;
 
-	stats = __schedstats_from_se(se);
-
-	if (entity_is_task(se))
-		tsk = task_of(se);
+	get_se_stats_and_task(se, &stats, &tsk);
 
 	__update_stats_enqueue_sleeper(rq_of(cfs_rq), tsk, stats);
 }
--
2.43.0