Message-ID: <20250929092221.10947-11-yurand2000@gmail.com>
Date: Mon, 29 Sep 2025 11:22:07 +0200
From: Yuri Andriaccio <yurand2000@...il.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>
Cc: linux-kernel@...r.kernel.org,
Luca Abeni <luca.abeni@...tannapisa.it>,
Yuri Andriaccio <yuri.andriaccio@...tannapisa.it>
Subject: [RFC PATCH v3 10/24] sched/deadline: Account rt-cgroup bandwidth in deadline task schedulability tests

From: luca abeni <luca.abeni@...tannapisa.it>

Account for the rt-cgroup hierarchy's reserved bandwidth in the
schedulability test of deadline entities. This makes it possible to
reserve a portion of the rt-bandwidth for rt-cgroups even when they do
not use all of it.

Also account for the rt-cgroups' reserved bandwidth when changing the
total bandwidth dedicated to real-time tasks.
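
For reference, the effect of the new term can be reproduced in
userspace. Below is a minimal sketch (not kernel code): to_ratio() and
cap_scale() are re-implemented here assuming BW_SHIFT == 20 and
SCHED_CAPACITY_SHIFT == 10 as in kernel/sched/sched.h, and all numeric
values are hypothetical:

#include <stdint.h>
#include <stdio.h>

#define BW_SHIFT		20	/* bandwidths are Q20 fixed point */
#define SCHED_CAPACITY_SHIFT	10	/* full CPU capacity == 1024 */

/* runtime/period as a Q20 ratio, mirroring the kernel's to_ratio() */
static uint64_t to_ratio(uint64_t period, uint64_t runtime)
{
	return (runtime << BW_SHIFT) / period;
}

/* scale a bandwidth by a summed CPU capacity, mirroring cap_scale() */
static uint64_t cap_scale(uint64_t bw, uint64_t cap)
{
	return (bw * cap) >> SCHED_CAPACITY_SHIFT;
}

int main(void)
{
	uint64_t cap = 2 << SCHED_CAPACITY_SHIFT;	/* two full-capacity CPUs */
	uint64_t dl_bw = to_ratio(1000000, 950000);	/* 95% global DL limit */
	uint64_t total_bw = to_ratio(100000, 30000);	/* already admitted DL bw */
	uint64_t new_bw = to_ratio(100000, 50000);	/* task asking admission */
	/* hypothetical root rt-cgroup reservation: 50ms every 100ms */
	uint64_t groups_bw = to_ratio(100000, 50000);

	/*
	 * The patched test: the rt-cgroup reservation is counted as if it
	 * were always fully used, so deadline tasks can never be admitted
	 * into the bandwidth reserved for the rt-cgroup hierarchy.
	 */
	int overflow = cap_scale(dl_bw, cap) <
		       total_bw + new_bw + cap_scale(groups_bw, cap);

	printf("admission %s\n", overflow ? "rejected" : "granted");
	return 0;
}

With these numbers the task is still admitted: the reservation term only
shrinks the bandwidth available to deadline tasks, independently of the
groups' actual consumption.
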
Co-developed-by: Alessio Balsini <a.balsini@...up.it>
Signed-off-by: Alessio Balsini <a.balsini@...up.it>
Co-developed-by: Andrea Parri <parri.andrea@...il.com>
Signed-off-by: Andrea Parri <parri.andrea@...il.com>
Co-developed-by: Yuri Andriaccio <yurand2000@...il.com>
Signed-off-by: Yuri Andriaccio <yurand2000@...il.com>
Signed-off-by: luca abeni <luca.abeni@...tannapisa.it>
---
kernel/sched/deadline.c | 21 ++++++++++++++++++---
1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 34a1494d782..754bfe231b4 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -213,8 +213,15 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
static inline bool
__dl_overflow(struct dl_bw *dl_b, unsigned long cap, u64 old_bw, u64 new_bw)
{
+ u64 dl_groups_root = 0;
+
+#ifdef CONFIG_RT_GROUP_SCHED
+ dl_groups_root = to_ratio(root_task_group.dl_bandwidth.dl_period,
+ root_task_group.dl_bandwidth.dl_runtime);
+#endif
return dl_b->bw != -1 &&
- cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
+ cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw
+ + cap_scale(dl_groups_root, cap);
}

static inline
@@ -3150,10 +3157,16 @@ int sched_dl_global_validate(void)
u64 period = global_rt_period();
u64 new_bw = to_ratio(period, runtime);
u64 cookie = ++dl_cookie;
+ u64 dl_groups_root = 0;
struct dl_bw *dl_b;
- int cpu, cpus, ret = 0;
+ int cpu, cap, cpus, ret = 0;
unsigned long flags;

+#ifdef CONFIG_RT_GROUP_SCHED
+ dl_groups_root = to_ratio(root_task_group.dl_bandwidth.dl_period,
+ root_task_group.dl_bandwidth.dl_runtime);
+#endif
+
/*
* Here we want to check the bandwidth not being set to some
* value smaller than the currently allocated bandwidth in
@@ -3166,10 +3179,12 @@ int sched_dl_global_validate(void)
goto next;

dl_b = dl_bw_of(cpu);
+ cap = dl_bw_capacity(cpu);
cpus = dl_bw_cpus(cpu);

raw_spin_lock_irqsave(&dl_b->lock, flags);
- if (new_bw * cpus < dl_b->total_bw)
+ if (new_bw * cpus < dl_b->total_bw +
+ cap_scale(dl_groups_root, cap))
ret = -EBUSY;
raw_spin_unlock_irqrestore(&dl_b->lock, flags);
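
As a worked example of the patched check, with hypothetical numbers: on
a 4-CPU SMP system with full-capacity CPUs (cpus = 4, cap = 4096),
sched_rt_runtime_us = 950000 over sched_rt_period_us = 1000000 gives
new_bw = 996147 in Q20, hence new_bw * cpus = 3984588. A root-group
reservation of dl_runtime = 400000us over dl_period = 1000000us gives
dl_groups_root = 419430, and cap_scale(dl_groups_root, 4096) = 1677720.
The write is then refused with -EBUSY as soon as the already-admitted
total_bw on that root domain exceeds 3984588 - 1677720 = 2306868, i.e.
roughly 2.2 CPUs' worth of deadline bandwidth.
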
--
2.51.0