Message-ID: <66943c82-2cfd-351b-7f36-5aefdb196a03@arm.com>
Date: Fri, 3 Jan 2020 12:14:35 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: "Zengtao (B)" <prime.zeng@...ilicon.com>,
Sudeep Holla <sudeep.holla@....com>
Cc: Linuxarm <linuxarm@...wei.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Morten Rasmussen <morten.rasmussen@....com>
Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts with
lower layer
On 03/01/2020 10:57, Valentin Schneider wrote:
> I'm juggling with other things atm, but let me have a think and see if we
> couldn't detect that in the scheduler itself.
>
Something like this ought to catch your case; we might need to compare group
spans rather than pure group pointers (a rough sketch of that variant follows
the diff).
---
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 6ec1e595b1d4..c4151e11afcd 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1120,6 +1120,13 @@ build_sched_groups(struct sched_domain *sd, int cpu)
 
 		sg = get_group(i, sdd);
 
+		/* sg's are inited as self-looping. If 'last' is not self
+		 * looping, we set it in a previous visit. No further visit
+		 * should change the link order; if one does, the topology
+		 * description is terminally broken.
+		 */
+		BUG_ON(last && last->next != last && last->next != sg);
+
 		cpumask_or(covered, covered, sched_group_span(sg));
 
 		if (!first)
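
For completeness, the span-based variant could look something like the below.
This is an untested sketch with the same placement as the BUG_ON() above;
cpumask_equal() and sched_group_span() are the existing helpers:

	/*
	 * Hypothetical span-based check (untested sketch): instead of
	 * comparing group pointers, only complain when the group already
	 * linked after 'last' covers a different set of CPUs than the
	 * group we are about to link.
	 */
	BUG_ON(last && last->next != last &&
	       !cpumask_equal(sched_group_span(last->next),
			      sched_group_span(sg)));

That would tolerate rebuilt group objects that happen to cover the same CPUs,
and only fire when the effective link order actually changes.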