Message-Id: <20241031-eevdf-doc-v2-1-8de4ed583f67@linaro.org>
Date: Thu, 31 Oct 2024 21:55:17 +0100
From: Linus Walleij <linus.walleij@...aro.org>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>
Cc: linux-kernel@...r.kernel.org, Linus Walleij <linus.walleij@...aro.org>
Subject: [PATCH v2] sched/eevdf: Fix documentation to refer to EEVDF
The references to "CFS" are a bit misleading these days, since
the scheduling principle is now EEVDF.

Rewrite the top-level comment, and trim other comments and
kerneldoc to implicitly refer to the scheduling implemented
in this file, or just to "the fair scheduler".
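
For the record, the principle the renamed comments now refer to can
be sketched in a few lines: an entity is eligible when its vruntime
is at or below the queue's weighted average vruntime, and among
eligible entities the earliest virtual deadline wins. A toy userspace
model (illustrative only; the names and fields here are made up, and
the in-tree logic around pick_eevdf() differs in detail):

#include <stdio.h>

struct entity {
	const char *name;
	long long vruntime;   /* virtual service received so far */
	long long vdeadline;  /* vruntime + requested slice in virtual time */
	long long weight;     /* nice-derived load weight */
};

/* Pick the eligible entity with the earliest virtual deadline. */
static struct entity *pick_eevdf_toy(struct entity *es, int n)
{
	long long wsum = 0, vsum = 0;
	struct entity *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		wsum += es[i].weight;
		vsum += es[i].vruntime * es[i].weight;
	}

	for (i = 0; i < n; i++) {
		/* Eligible: vruntime <= weighted average (vsum / wsum). */
		if (es[i].vruntime * wsum > vsum)
			continue;
		if (!best || es[i].vdeadline < best->vdeadline)
			best = &es[i];
	}
	return best;
}

int main(void)
{
	struct entity es[] = {
		{ "A", 100, 140, 1024 },
		{ "B",  90, 160, 1024 },
		{ "C", 130, 135, 1024 },
	};

	/* C has the earliest deadline but is not yet eligible, so A wins. */
	printf("picked: %s\n", pick_eevdf_toy(es, 3)->name);
	return 0;
}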
Signed-off-by: Linus Walleij <linus.walleij@...aro.org>
---
Changes in v2:
- Rebase on v6.12-rc1
- Tweak subject to make it more to the point
- Link to v1: https://lore.kernel.org/r/20240625-eevdf-doc-v1-1-215da9eb9354@linaro.org
---
kernel/sched/fair.c | 31 +++++++++++++++++--------------
1 file changed, 17 insertions(+), 14 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 225b31aaee55..eda6d1fe4257 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * Completely Fair Scheduling (CFS) Class (SCHED_NORMAL/SCHED_BATCH)
+ * Earliest Eligible Virtual Deadline First (EEVDF) Class (SCHED_NORMAL/SCHED_BATCH)
+ * also known as the fair time-sharing scheduler, refactored from the
+ * Completely Fair Scheduler (CFS).
*
* Copyright (C) 2007 Red Hat, Inc., Ingo Molnar <mingo@...hat.com>
*
@@ -17,7 +19,8 @@
* Scaled math optimizations by Thomas Gleixner
* Copyright (C) 2007, Thomas Gleixner <tglx@...utronix.de>
*
- * Adaptive scheduling granularity, math enhancements by Peter Zijlstra
+ * Adaptive scheduling granularity, math enhancements and rewrite to EEVDF
+ * by Peter Zijlstra
* Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
*/
#include <linux/energy_model.h>
@@ -297,7 +300,7 @@ static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
const struct sched_class fair_sched_class;
/**************************************************************
- * CFS operations on generic schedulable entities:
+ * Operations on generic schedulable entities:
*/
#ifdef CONFIG_FAIR_GROUP_SCHED
@@ -5691,7 +5694,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
/**************************************************
- * CFS bandwidth control machinery
+ * Bandwidth control machinery
*/
#ifdef CONFIG_CFS_BANDWIDTH
@@ -6802,7 +6805,7 @@ static inline void sched_fair_update_stop_tick(struct rq *rq, struct task_struct
#endif
/**************************************************
- * CFS operations on tasks:
+ * Operations on tasks:
*/
#ifdef CONFIG_SCHED_HRTICK
@@ -7958,7 +7961,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
}
/**
- * cpu_util() - Estimates the amount of CPU capacity used by CFS tasks.
+ * cpu_util() - Estimates the amount of CPU capacity used by tasks.
* @cpu: the CPU to get the utilization for
* @p: task for which the CPU utilization should be predicted or NULL
* @dst_cpu: CPU @p migrates to, -1 if @p moves from @cpu or @p == NULL
@@ -7969,7 +7972,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
*
* CPU utilization is the sum of running time of runnable tasks plus the
* recent utilization of currently non-runnable tasks on that CPU.
- * It represents the amount of CPU capacity currently used by CFS tasks in
+ * It represents the amount of CPU capacity currently used by tasks in
* the range [0..max CPU capacity] with max CPU capacity being the CPU
* capacity at f_max.
*
@@ -7981,7 +7984,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
* of such a task would be significantly decayed at this point of time.
*
* Boosted CPU utilization is defined as max(CPU runnable, CPU utilization).
- * CPU contention for CFS tasks can be detected by CPU runnable > CPU
+ * CPU contention for tasks can be detected by CPU runnable > CPU
* utilization. Boosting is implemented in cpu_util() so that internal
* users (e.g. EAS) can use it next to external users (e.g. schedutil),
* latter via cpu_util_cfs_boost().
@@ -9721,7 +9724,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
/*
* update_load_avg() can call cpufreq_update_util(). Make sure that RT,
- * DL and IRQ signals have been updated before updating CFS.
+ * DL and IRQ signals have been updated before updating the fair class.
*/
updated = update_other_load_avgs(rq);
@@ -9872,7 +9875,7 @@ struct sg_lb_stats {
unsigned long group_util; /* Total utilization over the CPUs of the group */
unsigned long group_runnable; /* Total runnable time over the CPUs of the group */
unsigned int sum_nr_running; /* Nr of all tasks running in the group */
- unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
+ unsigned int sum_h_nr_running; /* Nr of fair tasks running in the group */
unsigned int idle_cpus; /* Nr of idle CPUs in the group */
unsigned int group_weight;
enum group_type group_type;
@@ -10076,7 +10079,7 @@ static inline int sg_imbalanced(struct sched_group *group)
* be used by some tasks.
* We consider that a group has spare capacity if the number of task is
* smaller than the number of CPUs or if the utilization is lower than the
- * available capacity for CFS tasks.
+ * available capacity for fairly scheduled tasks.
* For the latter, we use a threshold to stabilize the state, to take into
* account the variance of the tasks' load and to return true if the available
* capacity in meaningful for the load balancer.
@@ -11566,7 +11569,7 @@ static int need_active_balance(struct lb_env *env)
return 1;
/*
- * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
+ * The dst_cpu is idle and the src_cpu CPU has only 1 task.
* It's worth migrating the task if the src_cpu's capacity is reduced
* because of other sched_class or IRQs if more capacity stays
* available on dst_cpu.
@@ -12312,7 +12315,7 @@ static void nohz_balancer_kick(struct rq *rq)
sd = rcu_dereference(rq->sd);
if (sd) {
/*
- * If there's a runnable CFS task and the current CPU has reduced
+ * If there's a runnable task and the current CPU has reduced
* capacity, kick the ILB to see if there's a better CPU to run on:
*/
if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
@@ -12937,7 +12940,7 @@ static inline void task_tick_core(struct rq *rq, struct task_struct *curr)
}
/*
- * se_fi_update - Update the cfs_rq->min_vruntime_fi in a CFS hierarchy if needed.
+ * se_fi_update - Update the cfs_rq->min_vruntime_fi in the hierarchy if needed.
*/
static void se_fi_update(const struct sched_entity *se, unsigned int fi_seq,
bool forceidle)
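
An aside on the cpu_util() kerneldoc touched above: the boost rule it
describes is just a max() of two PELT signals. A toy illustration with
made-up numbers in the usual [0..1024] capacity scale (not the
kernel's cpu_util() code):

#include <stdio.h>

/* Boosted CPU utilization = max(CPU runnable, CPU utilization). */
static unsigned long boosted_util(unsigned long util, unsigned long runnable)
{
	return runnable > util ? runnable : util;
}

int main(void)
{
	/* runnable > utilization signals contention: tasks wait for the CPU. */
	printf("boosted: %lu\n", boosted_util(300, 500)); /* prints 500 */
	return 0;
}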
---
base-commit: 9852d85ec9d492ebef56dc5f229416c925758edc
change-id: 20240625-eevdf-doc-2a1876d94ee4
Best regards,
--
Linus Walleij <linus.walleij@...aro.org>