Message-ID: <20251201124205.11169-26-yurand2000@gmail.com>
Date: Mon, 1 Dec 2025 13:41:58 +0100
From: Yuri Andriaccio <yurand2000@...il.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>
Cc: linux-kernel@...r.kernel.org,
Luca Abeni <luca.abeni@...tannapisa.it>,
Yuri Andriaccio <yuri.andriaccio@...tannapisa.it>
Subject: [RFC PATCH v4 25/28] sched/core: Execute enqueued balance callbacks when migrating tasks between cgroups
Execute the enqueued balance callbacks when migrating a task between cgroups:
as in the previous patch, the HCBS scheduler may request balancing of
throttled dl_servers in order to fully utilize the servers' bandwidth.
Introduce the RELEASE_LOCK helper macro to explicitly unlock a guard-based
lock. The macro calls the lock's destructor function and then invalidates
the guard, so that the lock is not unlocked a second time when the guard
variable goes out of scope.
Signed-off-by: Yuri Andriaccio <yurand2000@...il.com>
---
include/linux/cleanup.h | 3 +++
kernel/sched/core.c | 7 +++++++
2 files changed, 10 insertions(+)
diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index 2573585b7f..65c222b308 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -518,4 +518,7 @@ __DEFINE_LOCK_GUARD_0(_name, _lock)
 
 #define DEFINE_LOCK_GUARD_1_COND(X...) CONCATENATE(DEFINE_LOCK_GUARD_1_COND_, COUNT_ARGS(X))(X)
 
+#define RELEASE_LOCK(_name, _var) \
+	({ class_##_name##_destructor(&_var); no_free_ptr(_var.lock); })
+
 #endif /* _LINUX_CLEANUP_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2c08f31d3d..f69480243e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9223,6 +9223,7 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
 {
 	int queued, running, queue_flags =
 		DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
+	struct balance_callback *head;
 	struct rq *rq;
 
 	CLASS(task_rq_lock, rq_guard)(tsk);
@@ -9253,6 +9254,12 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
 	 */
 		resched_curr(rq);
 	}
+
+	preempt_disable();
+	head = splice_balance_callbacks(rq);
+	RELEASE_LOCK(task_rq_lock, rq_guard);
+	balance_callbacks(rq, head);
+	preempt_enable();
 }
 
 static struct cgroup_subsys_state *
--
2.51.0