Message-ID: <aN6bsSW1zSjuDV3q@jlelli-thinkpadt14gen4.remote.csb>
Date: Thu, 2 Oct 2025 16:35:13 +0100
From: Juri Lelli <juri.lelli@...hat.com>
To: Yuri Andriaccio <yurand2000@...il.com>
Cc: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Valentin Schneider <vschneid@...hat.com>,
	linux-kernel@...r.kernel.org,
	Luca Abeni <luca.abeni@...tannapisa.it>,
	Yuri Andriaccio <yuri.andriaccio@...tannapisa.it>
Subject: Re: [RFC PATCH v3 05/24] sched/rt: Disable RT_GROUP_SCHED

Hello,

On 29/09/25 11:22, Yuri Andriaccio wrote:
> Disable the old RT_GROUP_SCHED scheduler. Note that this does not completely
> remove the RT_GROUP_SCHED functionality; it just unhooks it and removes most
> of the relevant functions. Some of the RT_GROUP_SCHED functions are kept
> because they will be adapted for HCBS scheduling.
> 
> Signed-off-by: Yuri Andriaccio <yurand2000@...il.com>
> ---
>  kernel/sched/core.c     |   6 -
>  kernel/sched/deadline.c |  34 --
>  kernel/sched/debug.c    |   6 -
>  kernel/sched/rt.c       | 848 ++--------------------------------------
>  kernel/sched/sched.h    |  11 +-
>  kernel/sched/syscalls.c |  13 -
>  6 files changed, 26 insertions(+), 892 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index ccba6fc3c3f..5791aa1f8c8 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -8721,11 +8721,6 @@ void __init sched_init(void)
>  
>  	init_defrootdomain();
>  
> -#ifdef CONFIG_RT_GROUP_SCHED
> -	init_rt_bandwidth(&root_task_group.rt_bandwidth,
> -			global_rt_period(), global_rt_runtime());
> -#endif /* CONFIG_RT_GROUP_SCHED */
> -
>  #ifdef CONFIG_CGROUP_SCHED
>  	task_group_cache = KMEM_CACHE(task_group, 0);
>  
> @@ -8777,7 +8772,6 @@ void __init sched_init(void)
>  		 * starts working after scheduler_running, which is not the case
>  		 * yet.
>  		 */
> -		rq->rt.rt_runtime = global_rt_runtime();
>  		init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL);
>  #endif
>  		rq->sd = NULL;
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 6ff00f71041..277fbaff8b5 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1508,40 +1508,6 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
>  		if (!is_leftmost(dl_se, &rq->dl))
>  			resched_curr(rq);
>  	}
> -
> -	/*
> -	 * The fair server (sole dl_server) does not account for real-time
> -	 * workload because it is running fair work.
> -	 */
> -	if (dl_se == &rq->fair_server)
> -		return;
> -
> -#ifdef CONFIG_RT_GROUP_SCHED
> -	/*
> -	 * Because -- for now -- we share the rt bandwidth, we need to
> -	 * account our runtime there too, otherwise actual rt tasks
> -	 * would be able to exceed the shared quota.
> -	 *
> -	 * Account to the root rt group for now.
> -	 *
> -	 * The solution we're working towards is having the RT groups scheduled
> -	 * using deadline servers -- however there's a few nasties to figure
> -	 * out before that can happen.
> -	 */
> -	if (rt_bandwidth_enabled()) {
> -		struct rt_rq *rt_rq = &rq->rt;
> -
> -		raw_spin_lock(&rt_rq->rt_runtime_lock);
> -		/*
> -		 * We'll let actual RT tasks worry about the overflow here, we
> -		 * have our own CBS to keep us inline; only account when RT
> -		 * bandwidth is relevant.
> -		 */
> -		if (sched_rt_bandwidth_account(rt_rq))
> -			rt_rq->rt_time += delta_exec;
> -		raw_spin_unlock(&rt_rq->rt_runtime_lock);
> -	}
> -#endif /* CONFIG_RT_GROUP_SCHED */
>  }
>  
>  /*
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index 02e16b70a79..efcf8d82f85 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -890,12 +890,6 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
>  
>  	PU(rt_nr_running);
>  
> -#ifdef CONFIG_RT_GROUP_SCHED
> -	P(rt_throttled);
> -	PN(rt_time);
> -	PN(rt_runtime);
> -#endif
> -
>  #undef PN
>  #undef PU
>  #undef P
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index a599f63bf7f..c625ea45ca7 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1,3 +1,4 @@
> +#pragma GCC diagnostic ignored "-Wunused-function"

Uh oh, I guess this goes away with later patches? It's unfortunately not
very nice anyway, and it also breaks the requirement that the SPDX
license identifier be the very first line of the file.
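
For reference, a per-function annotation avoids both problems: the SPDX
identifier stays on the very first line, and the warning is silenced only
for the symbols that are temporarily unreferenced. A minimal sketch (the
function name below is made up, not taken from this series):

// SPDX-License-Identifier: GPL-2.0
/*
 * Sketch only: __attribute__((__unused__)) is what the kernel's
 * __maybe_unused macro expands to; it suppresses -Wunused-function
 * for just this symbol instead of the whole file.
 */
static __attribute__((__unused__)) void hcbs_helper_kept_for_later(void)
{
	/* placeholder for a function kept around for later patches */
}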

>  // SPDX-License-Identifier: GPL-2.0
>  /*
>   * Real-Time Scheduling Class (mapped to the SCHED_FIFO and SCHED_RR

So, this leaves only a skeleton of the current RT_GROUP implementation.
I believe the cgroup ABI will still be there, but it won't have any
effect. Since this is part of an atomic, all-or-nothing set of changes,
maybe that's OK? I guess people could get confused if for some reason
they end up running a kernel with patches applied only up to this one. :-)
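
To illustrate the kind of confusion I mean, here is a hypothetical
userspace-style sketch (all names invented, not code from this series)
of a knob whose writes still succeed but no longer drive anything:

#include <errno.h>

static long stored_rt_runtime_us = -1;	/* illustrative storage only */

static int cpu_rt_runtime_write(long rt_runtime_us)
{
	/* Validation is kept, so userspace sees the usual errors... */
	if (rt_runtime_us < -1)
		return -EINVAL;

	stored_rt_runtime_us = rt_runtime_us;
	/*
	 * ...but the hook that used to propagate the value into the
	 * scheduler has been unhooked, so the write has no effect.
	 */
	return 0;
}

int main(void)
{
	/* The write "succeeds" yet changes no scheduling behavior. */
	return cpu_rt_runtime_write(950000) ? 1 : 0;
}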

Best,
Juri

