Message-ID: <20250929092221.10947-7-yurand2000@gmail.com>
Date: Mon, 29 Sep 2025 11:22:03 +0200
From: Yuri Andriaccio <yurand2000@...il.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>
Cc: linux-kernel@...r.kernel.org,
Luca Abeni <luca.abeni@...tannapisa.it>,
Yuri Andriaccio <yuri.andriaccio@...tannapisa.it>
Subject: [RFC PATCH v3 06/24] sched/rt: Introduce HCBS specific structs in task_group
From: luca abeni <luca.abeni@...tannapisa.it>
Each task_group now manages a number of new objects:
- a sched_dl_entity/dl_server for each CPU
- a dl_bandwidth object that tracks the deadline runtime and period
  allocated to the group (shared by all of the group's dl-servers)

The rq pointer of struct rt_rq is also moved out of the
CONFIG_RT_GROUP_SCHED-only section: it now points to the cgroup's
runqueue when the rt_rq belongs to a cgroup, and to the top-level rq
otherwise.

A minimal, illustrative sketch of how these fields could be set up is
included below.
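
For context only, here is a minimal sketch (not part of this patch) of how
the per-CPU dl-server array and the shared dl_bandwidth could be allocated
and initialized when a task group is created, assuming it lives in
kernel/sched where sched.h is available. The helper name
hcbs_alloc_group_state() and the zeroed initial runtime/period are
assumptions, not code from this series:

static int hcbs_alloc_group_state(struct task_group *tg)
{
	int cpu;

	/* One dl-server pointer per possible CPU. */
	tg->dl_se = kcalloc(nr_cpu_ids, sizeof(struct sched_dl_entity *),
			    GFP_KERNEL);
	if (!tg->dl_se)
		return -ENOMEM;

	/* The dl_bandwidth object is shared by all of the group's dl-servers. */
	raw_spin_lock_init(&tg->dl_bandwidth.dl_runtime_lock);
	tg->dl_bandwidth.dl_runtime = 0;	/* assumed: no runtime until configured */
	tg->dl_bandwidth.dl_period = 0;

	for_each_possible_cpu(cpu) {
		tg->dl_se[cpu] = kzalloc_node(sizeof(struct sched_dl_entity),
					      GFP_KERNEL, cpu_to_node(cpu));
		if (!tg->dl_se[cpu])
			return -ENOMEM;	/* error unwinding omitted for brevity */
	}

	return 0;
}

Error unwinding is omitted for brevity; the actual allocation and
initialization paths are expected to be introduced by later patches in
this series.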
Co-developed-by: Alessio Balsini <a.balsini@...up.it>
Signed-off-by: Alessio Balsini <a.balsini@...up.it>
Co-developed-by: Andrea Parri <parri.andrea@...il.com>
Signed-off-by: Andrea Parri <parri.andrea@...il.com>
Co-developed-by: Yuri Andriaccio <yurand2000@...il.com>
Signed-off-by: Yuri Andriaccio <yurand2000@...il.com>
Signed-off-by: luca abeni <luca.abeni@...tannapisa.it>
---
kernel/sched/sched.h | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2d373a1ba67..59a154505d8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -319,6 +319,13 @@ struct rt_bandwidth {
unsigned int rt_period_active;
};
+struct dl_bandwidth {
+ raw_spinlock_t dl_runtime_lock;
+ u64 dl_runtime;
+ u64 dl_period;
+};
+
+
static inline int dl_bandwidth_enabled(void)
{
return sysctl_sched_rt_runtime >= 0;
@@ -467,10 +474,17 @@ struct task_group {
#endif /* CONFIG_FAIR_GROUP_SCHED */
#ifdef CONFIG_RT_GROUP_SCHED
+ /*
+ * Each task group manages one scheduling entity (i.e. one deadline
+ * server) and one runqueue per CPU. All the dl-servers share the
+ * same dl_bandwidth object.
+ */
struct sched_rt_entity **rt_se;
+ struct sched_dl_entity **dl_se;
struct rt_rq **rt_rq;
struct rt_bandwidth rt_bandwidth;
+ struct dl_bandwidth dl_bandwidth;
#endif
struct scx_task_group scx;
@@ -817,12 +831,12 @@ struct rt_rq {
raw_spinlock_t rt_runtime_lock;
unsigned int rt_nr_boosted;
-
- struct rq *rq; /* this is always top-level rq, cache? */
#endif
#ifdef CONFIG_CGROUP_SCHED
struct task_group *tg; /* this tg has "this" rt_rq on given CPU for runnable entities */
#endif
+
+ struct rq *rq; /* the cgroup's runqueue if this rt_rq belongs to a cgroup, otherwise the top-level rq */
};
static inline bool rt_rq_is_runnable(struct rt_rq *rt_rq)
--
2.51.0