Message-ID: <20251208092744.32737-25-kprateek.nayak@amd.com>
Date: Mon, 8 Dec 2025 09:27:11 +0000
From: K Prateek Nayak <kprateek.nayak@....com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Anna-Maria Behnsen <anna-maria@...utronix.de>,
Frederic Weisbecker <frederic@...nel.org>, Thomas Gleixner
<tglx@...utronix.de>
CC: <linux-kernel@...r.kernel.org>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
<vschneid@...hat.com>, K Prateek Nayak <kprateek.nayak@....com>, "Gautham R.
Shenoy" <gautham.shenoy@....com>, Swapnil Sapkal <swapnil.sapkal@....com>,
Shrikanth Hegde <sshegde@...ux.ibm.com>, Chen Yu <yu.c.chen@...el.com>
Subject: [RESEND RFC PATCH v2 25/29] sched/topology: Add basic debug information for "nohz_shared_list"
Introduce debug_nohz_shared_list_update() to count the number of entries
in "nohz_shared_list" after each list modification.

XXX: There isn't a great way to jump from a sched_domain_shared object
to the sched_domain struct that references it, which prevents printing
more information about the sched domain linked with the shared object.

Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
---
kernel/sched/topology.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index ec549fb7d7fc..738e6084d5be 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -471,6 +471,20 @@ DEFINE_PER_CPU(struct sched_domain __rcu *, sd_nohz);
static DEFINE_RAW_SPINLOCK(nohz_shared_list_lock);
LIST_HEAD(nohz_shared_list);
+static void debug_nohz_shared_list_update(void)
+{
+ struct sched_domain_shared *sds;
+ int count = 0;
+
+ if (!sched_debug())
+ return;
+
+ list_for_each_entry(sds, &nohz_shared_list, nohz_list_node)
+ count++;
+
+ pr_info("%s: %d nohz_shared_list entries found.\n", __func__, count);
+}
+
static int __sds_nohz_idle_alloc_init(struct sched_domain_shared *sds, int node)
{
sds->nohz_list_node = (struct list_head)LIST_HEAD_INIT(sds->nohz_list_node);
@@ -588,6 +602,7 @@ static void update_nohz_domain(int cpu)
guard(raw_spinlock)(&nohz_shared_list_lock);
list_add(&sds->nohz_list_node, &nohz_shared_list);
+ debug_nohz_shared_list_update();
}
WARN_ON_ONCE(sd && !sds);
@@ -612,8 +627,10 @@ static int sds_delayed_free(struct sched_domain_shared *sds)
if (list_empty(&sds->nohz_list_node))
return 0;
- scoped_guard(raw_spinlock_irqsave, &nohz_shared_list_lock)
+ scoped_guard(raw_spinlock_irqsave, &nohz_shared_list_lock) {
list_del_rcu(&sds->nohz_list_node);
+ debug_nohz_shared_list_update();
+ }
__nohz_exit_idle_tracking(sds);
call_rcu(&sds->rcu, destroy_sched_domain_shared_rcu);
--
2.43.0