Message-ID: <1dd8bee2-e176-466f-afdd-b19b067a480c@arm.com>
Date: Mon, 23 Dec 2024 15:12:03 +0000
From: Luis Machado <luis.machado@....com>
To: Linus Walleij <linus.walleij@...aro.org>, Ingo Molnar <mingo@...hat.com>,
 Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Dietmar Eggemann <dietmar.eggemann@....com>,
 Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
 Mel Gorman <mgorman@...e.de>, Daniel Bristot de Oliveira
 <bristot@...hat.com>, Valentin Schneider <vschneid@...hat.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] sched/eevdf: Fix documentation to refer to EEVDF

Hi,

On 12/20/24 19:28, Linus Walleij wrote:
> The references to "CFS" are a bit misleading these days since
> the scheduling principle is EEVDF.
> 
> Rewrite the top level comment, and trim other comments and
> kerneldoc to implicitly refer to the scheduling implemented
> in this file, or just "the fair scheduler".
> 
> Signed-off-by: Linus Walleij <linus.walleij@...aro.org>
> ---
> Changes in v3:
> - Rebase on v6.13-rc1
> - Link to v2: https://lore.kernel.org/r/20241031-eevdf-doc-v2-1-8de4ed583f67@linaro.org
> 
> Changes in v2:
> - Rebase on v6.12-rc1
> - Tweak subject to make it more to the point
> - Link to v1: https://lore.kernel.org/r/20240625-eevdf-doc-v1-1-215da9eb9354@linaro.org
> ---
>  kernel/sched/fair.c | 31 +++++++++++++++++--------------
>  1 file changed, 17 insertions(+), 14 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fbdca89c677f416bc911d63b25372b7e3df5de8f..437134247c60b9ad45630455b8213a2354ae804c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1,6 +1,8 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /*
> - * Completely Fair Scheduling (CFS) Class (SCHED_NORMAL/SCHED_BATCH)
> + * Earliest Eligible Deadline First (EEVDF) Class (SCHED_NORMAL/SCHED_BATCH)

A little pedantic, but s/Deadline/Virtual Deadline.
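
For reference, the virtual deadline in question is derived roughly like
this in this file (simplified sketch, not the exact kernel code):

	/* Simplified: an entity is eligible once its vruntime is no
	 * later than the weighted queue average; its virtual deadline
	 * is that vruntime plus its weight-scaled slice. */
	se->deadline = se->vruntime + calc_delta_fair(se->slice, se);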

> + * also known as the fair time-sharing scheduler, refactored from the
> + * Completely Fair Scheduler (CFS).
>   *
>   *  Copyright (C) 2007 Red Hat, Inc., Ingo Molnar <mingo@...hat.com>
>   *
> @@ -17,7 +19,8 @@
>   *  Scaled math optimizations by Thomas Gleixner
>   *  Copyright (C) 2007, Thomas Gleixner <tglx@...utronix.de>
>   *
> - *  Adaptive scheduling granularity, math enhancements by Peter Zijlstra
> + *  Adaptive scheduling granularity, math enhancements and rewrite to EEVDF
> + *  by Peter Zijlstra
>   *  Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
>   */
>  #include <linux/energy_model.h>
> @@ -297,7 +300,7 @@ static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
>  const struct sched_class fair_sched_class;
>  
>  /**************************************************************
> - * CFS operations on generic schedulable entities:
> + * Operations on generic schedulable entities:
>   */
>  
>  #ifdef CONFIG_FAIR_GROUP_SCHED
> @@ -5686,7 +5689,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
>  
>  
>  /**************************************************
> - * CFS bandwidth control machinery
> + * Bandwidth control machinery
>   */
>  
>  #ifdef CONFIG_CFS_BANDWIDTH
> @@ -6800,7 +6803,7 @@ static inline void sched_fair_update_stop_tick(struct rq *rq, struct task_struct
>  #endif
>  
>  /**************************************************
> - * CFS operations on tasks:
> + * Operations on tasks:
>   */
>  
>  #ifdef CONFIG_SCHED_HRTICK
> @@ -7962,7 +7965,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>  }
>  
>  /**
> - * cpu_util() - Estimates the amount of CPU capacity used by CFS tasks.
> + * cpu_util() - Estimates the amount of CPU capacity used by tasks.
>   * @cpu: the CPU to get the utilization for
>   * @p: task for which the CPU utilization should be predicted or NULL
>   * @dst_cpu: CPU @p migrates to, -1 if @p moves from @cpu or @p == NULL
> @@ -7973,7 +7976,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>   *
>   * CPU utilization is the sum of running time of runnable tasks plus the
>   * recent utilization of currently non-runnable tasks on that CPU.
> - * It represents the amount of CPU capacity currently used by CFS tasks in
> + * It represents the amount of CPU capacity currently used by tasks in
>   * the range [0..max CPU capacity] with max CPU capacity being the CPU
>   * capacity at f_max.
>   *
> @@ -7985,7 +7988,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>   * of such a task would be significantly decayed at this point of time.
>   *
>   * Boosted CPU utilization is defined as max(CPU runnable, CPU utilization).
> - * CPU contention for CFS tasks can be detected by CPU runnable > CPU
> + * CPU contention for tasks can be detected by CPU runnable > CPU
>   * utilization. Boosting is implemented in cpu_util() so that internal
>   * users (e.g. EAS) can use it next to external users (e.g. schedutil),
>   * latter via cpu_util_cfs_boost().
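
Worked example (numbers made up): with utilization = 300 and runnable
= 450, the boosted value is max(450, 300) = 450, and runnable >
utilization is the contention signal described above.
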
> @@ -9725,7 +9728,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>  
>  	/*
>  	 * update_load_avg() can call cpufreq_update_util(). Make sure that RT,
> -	 * DL and IRQ signals have been updated before updating CFS.
> +	 * DL and IRQ signals have been updated before updating the scheduler.
>  	 */
>  	updated = update_other_load_avgs(rq);
>  
> @@ -9876,7 +9879,7 @@ struct sg_lb_stats {
>  	unsigned long group_util;		/* Total utilization   over the CPUs of the group */
>  	unsigned long group_runnable;		/* Total runnable time over the CPUs of the group */
>  	unsigned int sum_nr_running;		/* Nr of all tasks running in the group */
> -	unsigned int sum_h_nr_running;		/* Nr of CFS tasks running in the group */
> +	unsigned int sum_h_nr_running;		/* Nr of tasks running in the group */
>  	unsigned int idle_cpus;                 /* Nr of idle CPUs         in the group */
>  	unsigned int group_weight;
>  	enum group_type group_type;
> @@ -10080,7 +10083,7 @@ static inline int sg_imbalanced(struct sched_group *group)
>   * be used by some tasks.
>   * We consider that a group has spare capacity if the number of task is
>   * smaller than the number of CPUs or if the utilization is lower than the
> - * available capacity for CFS tasks.
> + * available capacity for fairly scheduled tasks.
>   * For the latter, we use a threshold to stabilize the state, to take into
>   * account the variance of the tasks' load and to return true if the available
>  * capacity is meaningful for the load balancer.
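
For reference, "spare capacity" here roughly reduces to the following
(simplified from group_has_capacity(), with imbalance_pct being the
stabilizing threshold mentioned above):

	sgs->sum_nr_running < sgs->group_weight ||
	sgs->group_util * imbalance_pct < sgs->group_capacity * 100
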
> @@ -11570,7 +11573,7 @@ static int need_active_balance(struct lb_env *env)
>  		return 1;
>  
>  	/*
> -	 * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
> +	 * The dst_cpu is idle and the src_cpu CPU has only 1 task.
>  	 * It's worth migrating the task if the src_cpu's capacity is reduced
>  	 * because of other sched_class or IRQs if more capacity stays
>  	 * available on dst_cpu.
> @@ -12316,7 +12319,7 @@ static void nohz_balancer_kick(struct rq *rq)
>  	sd = rcu_dereference(rq->sd);
>  	if (sd) {
>  		/*
> -		 * If there's a runnable CFS task and the current CPU has reduced
> +		 * If there's a runnable task and the current CPU has reduced
>  		 * capacity, kick the ILB to see if there's a better CPU to run on:
>  		 */
>  		if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
> @@ -12941,7 +12944,7 @@ static inline void task_tick_core(struct rq *rq, struct task_struct *curr)
>  }
>  
>  /*
> - * se_fi_update - Update the cfs_rq->min_vruntime_fi in a CFS hierarchy if needed.
> + * se_fi_update - Update the cfs_rq->min_vruntime_fi in the hierarchy if needed.
>   */
>  static void se_fi_update(const struct sched_entity *se, unsigned int fi_seq,
>  			 bool forceidle)
> 
> ---
> base-commit: 40384c840ea1944d7c5a392e8975ed088ecf0b37
> change-id: 20240625-eevdf-doc-2a1876d94ee4
> 
> Best regards,

For all/most of these CFS mentions, I wonder whether we want to keep
referring to a generic fair scheduler, as opposed to explicitly naming
the algorithm (CFS vs EEVDF). An example is below.
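
E.g. a header like "Fair-class bandwidth control machinery" stays
accurate whichever algorithm backs the fair class, while naming EEVDF
explicitly would need another pass if the algorithm changes again.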
