Message-ID: <20241113125724.450249-2-juri.lelli@redhat.com>
Date: Wed, 13 Nov 2024 12:57:22 +0000
From: Juri Lelli <juri.lelli@...hat.com>
To: Waiman Long <longman@...hat.com>,
	Tejun Heo <tj@...nel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Koutny <mkoutny@...e.com>,
	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>,
	Mel Gorman <mgorman@...e.de>,
	Valentin Schneider <vschneid@...hat.com>
Cc: Qais Yousef <qyousef@...alina.io>,
	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	"Joel Fernandes (Google)" <joel@...lfernandes.org>,
	Suleiman Souhlal <suleiman@...gle.com>,
	Aashish Sharma <shraash@...gle.com>,
	Shin Kawamura <kawasin@...gle.com>,
	Vineeth Remanan Pillai <vineeth@...byteword.org>,
	linux-kernel@...r.kernel.org,
	cgroups@...r.kernel.org,
	Juri Lelli <juri.lelli@...hat.com>
Subject: [PATCH 1/2] sched/deadline: Restore dl_server bandwidth on non-destructive root domain changes

When non-destructive root domain changes happen (e.g., only one of the
existing root domains is modified while the rest are left untouched), we
still need to clear DEADLINE bandwidth accounting so that it can then be
properly restored, taking into account the DEADLINE tasks associated
with each cpuset (and thus with each root domain). After the
introduction of dl_servers, we fail to restore the servers' contribution
after non-destructive changes, as they are only considered on
destructive changes, when runqueues are attached to the new domains.

Fix this by iterating over the dl_servers attached to domains that have
not been destroyed and correctly adding their bandwidth contribution
back.

Signed-off-by: Juri Lelli <juri.lelli@...hat.com>
---
 include/linux/sched/deadline.h |  2 +-
 kernel/cgroup/cpuset.c         |  2 +-
 kernel/sched/deadline.c        | 18 +++++++++++++-----
 kernel/sched/topology.c        | 10 ++++++----
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
index 3a912ab42bb5..82c966a55856 100644
--- a/include/linux/sched/deadline.h
+++ b/include/linux/sched/deadline.h
@@ -33,7 +33,7 @@ static inline bool dl_time_before(u64 a, u64 b)
 
 struct root_domain;
 extern void dl_add_task_root_domain(struct task_struct *p);
-extern void dl_clear_root_domain(struct root_domain *rd);
+extern void dl_clear_root_domain(struct root_domain *rd, bool restore);
 
 #endif /* CONFIG_SMP */
 
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 142303abb055..4d3603a99db3 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -954,7 +954,7 @@ static void dl_rebuild_rd_accounting(void)
 	 * Clear default root domain DL accounting, it will be computed again
 	 * if a task belongs to it.
 	 */
-	dl_clear_root_domain(&def_root_domain);
+	dl_clear_root_domain(&def_root_domain, false);
 
 	cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
 
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 9ce93d0bf452..e53208a50279 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2968,13 +2968,21 @@ void dl_add_task_root_domain(struct task_struct *p)
 	task_rq_unlock(rq, p, &rf);
 }
 
-void dl_clear_root_domain(struct root_domain *rd)
+void dl_clear_root_domain(struct root_domain *rd, bool restore)
 {
-	unsigned long flags;
-
-	raw_spin_lock_irqsave(&rd->dl_bw.lock, flags);
+	guard(raw_spinlock_irqsave)(&rd->dl_bw.lock);
 	rd->dl_bw.total_bw = 0;
-	raw_spin_unlock_irqrestore(&rd->dl_bw.lock, flags);
+
+	if (restore) {
+		int i;
+
+		for_each_cpu(i, rd->span) {
+			struct sched_dl_entity *dl_se = &cpu_rq(i)->fair_server;
+
+			if (dl_server(dl_se))
+				rd->dl_bw.total_bw += dl_se->dl_bw;
+		}
+	}
 }
 
 #endif /* CONFIG_SMP */
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c8d668..e9e7a7c43dd6 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2721,12 +2721,14 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
 
 				/*
 				 * This domain won't be destroyed and as such
-				 * its dl_bw->total_bw needs to be cleared.  It
-				 * will be recomputed in function
-				 * update_tasks_root_domain().
+				 * its dl_bw->total_bw needs to be cleared.
+				 * The tasks' contribution will then be
+				 * recomputed in dl_update_tasks_root_domain()
+				 * and the dl_servers' contribution in
+				 * dl_restore_server_root_domain().
 				 */
 				rd = cpu_rq(cpumask_any(doms_cur[i]))->rd;
-				dl_clear_root_domain(rd);
+				dl_clear_root_domain(rd, true);
 				goto match1;
 			}
 		}
-- 
2.47.0

