Message-ID: <20250805122316.1097085-1-quic_zhonhan@quicinc.com>
Date: Tue, 5 Aug 2025 20:23:16 +0800
From: Zhongqiu Han <quic_zhonhan@...cinc.com>
To: <mingo@...hat.com>, <peterz@...radead.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<vschneid@...hat.com>
CC: <linux-kernel@...r.kernel.org>, <quic_zhonhan@...cinc.com>
Subject: [PATCH] sched/fair: Update stale comments referencing last/skip buddy

Since the switch to EEVDF, the last/skip buddy scheduling features have
been removed. Update the stale comments that still reference these
legacy behaviors so they no longer contradict the current code.

Fixes: 5e963f2bd465 ("sched/fair: Commit to EEVDF")
Signed-off-by: Zhongqiu Han <quic_zhonhan@...cinc.com>
---
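A reviewer note, not part of the patch: the first hunk below only
rewords a comment. The pick order that comment describes is roughly
the minimal sketch below, assuming the post-EEVDF helpers
sched_feat(PICK_BUDDY), entity_eligible() and pick_eevdf() as found
in current fair.c (the real function carries a few extra checks):

	static struct sched_entity *
	pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
	{
		/*
		 * Prefer the ->next buddy when the feature is enabled
		 * and the buddy is still eligible under EEVDF.
		 */
		if (sched_feat(PICK_BUDDY) &&
		    cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next))
			return cfs_rq->next;

		/* Otherwise fall back to the regular EEVDF pick. */
		return pick_eevdf(cfs_rq);
	}
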
kernel/sched/fair.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b173a059315c..b3618aa075ec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5500,11 +5500,11 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags);
/*
- * Pick the next process, keeping these things in mind, in this order:
- * 1) keep things fair between processes/task groups
- * 2) pick the "next" process, since someone really wants that to run
- * 3) pick the "last" process, for cache locality
- * 4) do not run the "skip" process, if something else is available
+ * Pick the next sched_entity to run from cfs_rq.
+ *
+ * Prefer ->next buddy if sched_feat(PICK_BUDDY) is enabled and it's eligible,
+ * to improve cache locality.
+ * Otherwise, pick the entity via EEVDF for fairness and latency control.
*/
static struct sched_entity *
pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
@@ -8673,9 +8673,9 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
*
* Note: this also catches the edge-case of curr being in a throttled
* group (e.g. via set_curr_task), since update_curr() (in the
- * enqueue of curr) will have resulted in resched being set. This
- * prevents us from potentially nominating it as a false LAST_BUDDY
- * below.
+ * enqueue of curr) will have resulted in resched being set. This
+ * prevents further preemption handling, including checks and potential
+ * reschedule triggering.
*/
if (test_tsk_need_resched(rq->curr))
return;
--
2.43.0