Message-ID: <aXCom_OANuM4WP_E@jlelli-thinkpadt14gen4.remote.csb>
Date: Wed, 21 Jan 2026 11:21:15 +0100
From: Juri Lelli <juri.lelli@...hat.com>
To: Yuri Andriaccio <yurand2000@...il.com>
Cc: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Valentin Schneider <vschneid@...hat.com>,
	linux-kernel@...r.kernel.org,
	Luca Abeni <luca.abeni@...tannapisa.it>,
	Yuri Andriaccio <yuri.andriaccio@...tannapisa.it>
Subject: Re: [RFC PATCH v4 14/28] sched/rt: Update rt-cgroup schedulability
 checks

Hello,

On 01/12/25 13:41, Yuri Andriaccio wrote:
> From: luca abeni <luca.abeni@...tannapisa.it>
> 
> Update sched_group_rt_runtime/period and sched_group_set_rt_runtime/period
> to use the newly defined data structures and perform necessary checks to
> update both the runtime and period of a given group.
> 
> The set functions call tg_set_rt_bandwidth() which is also updated:
> - Use the newly added HCBS dl_bandwidth structure instead of rt_bandwidth.
> - Update __rt_schedulable() to check for numerical issues:
>   - Prevent a non-zero runtime that is too small, since a very small
>     non-zero runtime makes the servers behave as if they had zero
>     runtime.
>   - Since some computations use signed integers, the period might be
>     so big that, when read as a signed integer, it becomes a negative
>     number, which we don't want. If the period satisfies this check,
>     so does the runtime, since the runtime is always less than or
>     equal to the period.
> - Update tg_rt_schedulable(), used when walking the cgroup tree to check
>   that all invariants are met:
>   - Update most of the code to read data from the newly added data
>     structures (dl_bandwidth).
>   - If the task group is the root group, run a total bandwidth check with
>     the newly added dl_check_tg() function.
> - After all checks have succeeded, if the changed group is not the root
>   cgroup, propagate the updated runtime and period to all of its local
>   deadline servers.
> - Additionally use a mutex guard instead of manually locking/unlocking.
> 
> Add dl_check_tg(), which performs an admission control test similar to
> __dl_overflow(), but this time we are updating the cgroup's total
> bandwidth rather than scheduling a new DEADLINE task or updating a
> non-cgroup deadline server.
> 
> Finally, prevent the creation of cgroup hierarchies deeper than two
> levels; deeper hierarchies will be addressed in a future patch. A
> depth-two hierarchy is sufficient for now for testing the patchset.
> 
> Co-developed-by: Alessio Balsini <a.balsini@...up.it>
> Signed-off-by: Alessio Balsini <a.balsini@...up.it>
> Co-developed-by: Andrea Parri <parri.andrea@...il.com>
> Signed-off-by: Andrea Parri <parri.andrea@...il.com>
> Co-developed-by: Yuri Andriaccio <yurand2000@...il.com>
> Signed-off-by: Yuri Andriaccio <yurand2000@...il.com>
> Signed-off-by: luca abeni <luca.abeni@...tannapisa.it>
> ---

...

>  #ifdef CONFIG_RT_GROUP_SCHED
> +int dl_check_tg(unsigned long total)
> +{
> +	unsigned long flags;
> +	int which_cpu;
> +	int cap;
> +	struct dl_bw *dl_b;
> +	u64 gen = ++dl_cookie;
> +
> +	for_each_possible_cpu(which_cpu) {
> +		rcu_read_lock_sched();
> +
> +		if (!dl_bw_visited(which_cpu, gen)) {
> +			cap = dl_bw_capacity(which_cpu);
> +			dl_b = dl_bw_of(which_cpu);
> +
> +			raw_spin_lock_irqsave(&dl_b->lock, flags);
> +
> +			if (dl_b->bw != -1 &&
> +			    cap_scale(dl_b->bw, cap) < dl_b->total_bw + cap_scale(total, cap)) {
> +				raw_spin_unlock_irqrestore(&dl_b->lock, flags);
> +				rcu_read_unlock_sched();
> +
> +				return 0;
> +			}
> +
> +			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
> +		}
> +
> +		rcu_read_unlock_sched();

I believe we can use lock guards in the above?
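
Something along these lines maybe (untested sketch; assumes the
raw_spinlock_irqsave guard class from <linux/cleanup.h>; I don't think
we have an rcu_read_lock_sched() flavoured guard, so plain guard(rcu)()
stands in for it here, and the local flags variable goes away):

	for_each_possible_cpu(which_cpu) {
		guard(rcu)();	/* stand-in for rcu_read_lock_sched() */

		if (dl_bw_visited(which_cpu, gen))
			continue;

		cap = dl_bw_capacity(which_cpu);
		dl_b = dl_bw_of(which_cpu);

		guard(raw_spinlock_irqsave)(&dl_b->lock);

		/* Both guards are dropped automatically on return. */
		if (dl_b->bw != -1 &&
		    cap_scale(dl_b->bw, cap) < dl_b->total_bw + cap_scale(total, cap))
			return 0;
	}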

...

> @@ -2108,6 +2107,20 @@ static int __rt_schedulable(struct task_group *tg, u64 period, u64 runtime)
>  		.rt_runtime = runtime,
>  	};
>  
> +	/*
> +	 * Since we truncate DL_SCALE bits, make sure we're at least
> +	 * that big.
> +	 */
> +	if (runtime != 0 && runtime < (1ULL << DL_SCALE))
> +		return -EINVAL;
> +
> +	/*
> +	 * Since we use the MSB for wrap-around and sign issues, make
> +	 * sure it's not set (mind that period can be equal to zero).
> +	 */
> +	if (period & (1ULL << 63))
> +		return -EINVAL;
> +

This is the same as in __checkparam_dl(), isn't it? Maybe we can create
a helper?
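
Something like the below, perhaps (helper name purely illustrative):

/*
 * Common sanity checks on a runtime/period pair, which could be shared
 * by __checkparam_dl() and __rt_schedulable().
 */
static inline bool dl_param_in_range(u64 runtime, u64 period)
{
	/*
	 * Since we truncate DL_SCALE bits, a non-zero runtime must be
	 * at least that big.
	 */
	if (runtime != 0 && runtime < (1ULL << DL_SCALE))
		return false;

	/*
	 * The MSB is used for wrap-around and sign issues, so it must
	 * not be set (mind that period can be equal to zero).
	 */
	if (period & (1ULL << 63))
		return false;

	return true;
}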

Thanks,
Juri

