Message-ID: <173313808391.412.7201088170374631569.tip-bot2@tip-bot2>
Date: Mon, 02 Dec 2024 11:14:43 -0000
From: "tip-bot2 for Juri Lelli" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Juri Lelli <juri.lelli@...hat.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, Phil Auld <pauld@...hat.com>,
Waiman Long <longman@...hat.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched/deadline: Restore dl_server bandwidth on
non-destructive root domain changes
The following commit has been merged into the sched/core branch of tip:

Commit-ID:     41d4200b7103152468552ee50998cda914102049
Gitweb:        https://git.kernel.org/tip/41d4200b7103152468552ee50998cda914102049
Author:        Juri Lelli <juri.lelli@...hat.com>
AuthorDate:    Thu, 14 Nov 2024 14:28:09
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Mon, 02 Dec 2024 12:01:30 +01:00

sched/deadline: Restore dl_server bandwidth on non-destructive root domain changes

When a non-destructive root domain change happens (e.g., only one of
the existing root domains is modified while the rest are left
untouched), we still need to clear DEADLINE bandwidth accounting so
that it can then be properly restored, taking into account the
DEADLINE tasks associated with each cpuset (and thus with each root
domain). Since the introduction of dl_servers, we fail to restore the
servers' contribution after such non-destructive changes, as servers
are only accounted for on destructive changes, when runqueues are
attached to the new domains.

Fix this by iterating over the dl_servers attached to domains that
have not been destroyed and adding their bandwidth contribution back
correctly.
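
Put differently, after this patch a non-destructive change rebuilds a
surviving root domain's DEADLINE bandwidth from both sources. A minimal
sketch of the intended accounting flow (illustrative only;
for_each_dl_task_in_rd() is a hypothetical iterator standing in for the
real per-task restore path, dl_add_task_root_domain()):

	/* Sketch, not kernel code: rebuild rd->dl_bw.total_bw from scratch. */
	rd->dl_bw.total_bw = 0;

	/* 1) DEADLINE tasks' contribution (per cpuset / root domain). */
	for_each_dl_task_in_rd(p, rd)		/* hypothetical iterator */
		rd->dl_bw.total_bw += p->dl.dl_bw;

	/* 2) dl_servers' contribution, which this path lost before the patch. */
	for_each_cpu(i, rd->span)
		if (dl_server(&cpu_rq(i)->fair_server) && cpu_active(i))
			rd->dl_bw.total_bw += cpu_rq(i)->fair_server.dl_bw;
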
Signed-off-by: Juri Lelli <juri.lelli@...hat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Phil Auld <pauld@...hat.com>
Tested-by: Waiman Long <longman@...hat.com>
Link: https://lore.kernel.org/r/20241114142810.794657-2-juri.lelli@redhat.com
---
kernel/sched/deadline.c | 17 ++++++++++++++---
kernel/sched/topology.c | 8 +++++---
2 files changed, 19 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index db47f33..ff68ce4 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2960,11 +2960,22 @@ void dl_add_task_root_domain(struct task_struct *p)
 
 void dl_clear_root_domain(struct root_domain *rd)
 {
-	unsigned long flags;
+	int i;
 
-	raw_spin_lock_irqsave(&rd->dl_bw.lock, flags);
+	guard(raw_spinlock_irqsave)(&rd->dl_bw.lock);
 	rd->dl_bw.total_bw = 0;
-	raw_spin_unlock_irqrestore(&rd->dl_bw.lock, flags);
+
+	/*
+	 * dl_server bandwidth is only restored when CPUs are attached to root
+	 * domains (after domains are created or CPUs moved back to the
+	 * default root domain).
+	 */
+	for_each_cpu(i, rd->span) {
+		struct sched_dl_entity *dl_se = &cpu_rq(i)->fair_server;
+
+		if (dl_server(dl_se) && cpu_active(i))
+			rd->dl_bw.total_bw += dl_se->dl_bw;
+	}
 }
 
 #endif /* CONFIG_SMP */
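
Worth noting in the hunk above: the explicit lock/unlock pair is
replaced by a scope-based guard from <linux/cleanup.h>. As a rough
sketch of the equivalence (not the macro's actual expansion):

	/* Before: manual lock/unlock with saved interrupt state. */
	unsigned long flags;

	raw_spin_lock_irqsave(&rd->dl_bw.lock, flags);
	/* ... critical section ... */
	raw_spin_unlock_irqrestore(&rd->dl_bw.lock, flags);

	/* After: the guard takes the lock here and releases it
	 * automatically at every scope exit, early returns included.
	 */
	guard(raw_spinlock_irqsave)(&rd->dl_bw.lock);
	/* ... critical section ... */
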
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c..9c405f0 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2721,9 +2721,11 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
 
 				/*
 				 * This domain won't be destroyed and as such
-				 * its dl_bw->total_bw needs to be cleared. It
-				 * will be recomputed in function
-				 * update_tasks_root_domain().
+				 * its dl_bw->total_bw needs to be cleared.
+				 * Tasks' contribution will then be recomputed
+				 * in dl_update_tasks_root_domain(), and the
+				 * dl_servers' contribution in
+				 * dl_restore_server_root_domain().
				 */
				rd = cpu_rq(cpumask_any(doms_cur[i]))->rd;
				dl_clear_root_domain(rd);
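
For context, this dl_clear_root_domain() call sits on the path that
detects surviving (non-destructive) domains. A simplified sketch of
that loop, assuming the overall structure of
partition_sched_domains_locked() (details such as the sched domain
attribute comparison are elided):

	for (i = 0; i < ndoms_cur; i++) {
		for (j = 0; j < n; j++) {
			if (cpumask_equal(doms_cur[i], doms_new[j])) {
				/* Domain survives: clear its DEADLINE
				 * bandwidth; dl_clear_root_domain() now
				 * also re-adds the dl_server bandwidth.
				 */
				struct root_domain *rd;

				rd = cpu_rq(cpumask_any(doms_cur[i]))->rd;
				dl_clear_root_domain(rd);
				goto match;
			}
		}
		/* No match: the old domain gets destroyed. */
		detach_destroy_domains(doms_cur[i]);
match:
		;
	}
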