Message-ID: <53b65a17a8900cbe5b7e42e599390d62434205d8.camel@linux.intel.com>
Date:   Tue, 11 Jul 2023 09:32:54 -0700
From:   Tim Chen <tim.c.chen@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Ricardo Neri <ricardo.neri@...el.com>,
        "Ravi V . Shankar" <ravi.v.shankar@...el.com>,
        Ben Segall <bsegall@...gle.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Len Brown <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Valentin Schneider <vschneid@...hat.com>,
        Ionela Voinescu <ionela.voinescu@....com>, x86@...nel.org,
        linux-kernel@...r.kernel.org,
        Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>,
        Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        naveen.n.rao@...ux.vnet.ibm.com,
        Yicong Yang <yangyicong@...ilicon.com>,
        Barry Song <v-songbaohua@...o.com>,
        Chen Yu <yu.c.chen@...el.com>, Hillf Danton <hdanton@...a.com>
Subject: Re: [Patch v3 2/6] sched/topology: Record number of cores in sched
 group

On Tue, 2023-07-11 at 13:31 +0200, Peter Zijlstra wrote:
> On Mon, Jul 10, 2023 at 03:40:34PM -0700, Tim Chen wrote:
> > On Fri, 2023-07-07 at 15:57 -0700, Tim Chen wrote:
> > > From: Tim C Chen <tim.c.chen@...ux.intel.com>
> > > 
> > > When balancing sibling domains that have different number of cores,
> > > tasks in respective sibling domain should be proportional to the number
> > > of cores in each domain. In preparation of implementing such a policy,
> > > record the number of tasks in a scheduling group.
> > 
> > Caught a typo.  Should be "the number of cores" instead of
> > "the number of tasks" in a scheduling group.
> > 
> > Peter, should I send you another patch with the corrected commit log?
> 
> I'll fix it up; I already had to fix the patch because the robot
> found a compile fail for SCHED_SMT=n builds.
> 
> 
> 
> > > @@ -1275,14 +1275,22 @@ build_sched_groups(struct sched_domain *sd, int cpu)
> > >  static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
> > >  {
> > >  	struct sched_group *sg = sd->groups;
> > > +	struct cpumask *mask = sched_domains_tmpmask2;
> > >  
> > >  	WARN_ON(!sg);
> > >  
> > >  	do {
> > > -		int cpu, max_cpu = -1;
> > > +		int cpu, cores = 0, max_cpu = -1;
> > >  
> > >  		sg->group_weight = cpumask_weight(sched_group_span(sg));
> > >  
> > > +		cpumask_copy(mask, sched_group_span(sg));
> > > +		for_each_cpu(cpu, mask) {
> > > +			cores++;
> #ifdef CONFIG_SCHED_SMT
> > > +			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
> #else
> 			__cpumask_clear_cpu(cpu, mask);

Thanks for fixing up the non-SCHED_SMT case.

I think the "__cpumask_clear_cpu(cpu, mask);" line can be removed.

Since the iterator has already visited that CPU, clearing it from the
mask is unnecessary.  So effectively

for_each_cpu(cpu, mask) {
	cores++;
}

should be good enough for the non-SCHED_SMT case.

Or replace the patch with the one below, so we don't have an #ifdef in
the middle of the code body.  Either way is fine.
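
A minimal userspace sketch of the core-counting idea (not kernel code;
the 8-CPU topology and the plain-bitmap stand-ins for struct cpumask
below are made up for illustration): copy the group span, and for each
CPU still left in the copy, count one core and strip that CPU's whole
SMT sibling mask.  Without SMT each sibling mask would hold only the
CPU itself, which the iterator's own visit already accounts for, so the
count degenerates to the group weight.

#include <stdio.h>

#define NR_CPUS 8

/*
 * smt_mask[cpu]: bitmap of that CPU and its SMT siblings
 * (assumed pairs {0,4} {1,5} {2,6} {3,7}).
 */
static const unsigned long smt_mask[NR_CPUS] = {
	0x11, 0x22, 0x44, 0x88, 0x11, 0x22, 0x44, 0x88,
};

static int count_cores(unsigned long span)
{
	unsigned long mask = span;		/* cpumask_copy()            */
	int cpu, cores = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {	/* for_each_cpu(cpu, mask)   */
		if (!(mask & (1UL << cpu)))
			continue;
		cores++;
		mask &= ~smt_mask[cpu];		/* cpumask_andnot() with the
						   CPU's SMT sibling mask    */
	}
	return cores;
}

int main(void)
{
	unsigned long span = 0xff;	/* 8 CPUs: 4 cores, 2 threads each */

	printf("cores = %d\n", count_cores(span));	/* prints 4 */
	return 0;
}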

---

From 9f19714db69739a7985e46bc1f8334d70a69cf2e Mon Sep 17 00:00:00 2001
Message-Id: <9f19714db69739a7985e46bc1f8334d70a69cf2e.1689092923.git.tim.c.chen@...ux.intel.com>
In-Reply-To: <cover.1689092923.git.tim.c.chen@...ux.intel.com>
References: <cover.1689092923.git.tim.c.chen@...ux.intel.com>
From: Tim C Chen <tim.c.chen@...ux.intel.com>
Date: Wed, 17 May 2023 09:09:54 -0700
Subject: [Patch v3 2/6] sched/topology: Record number of cores in sched group
To: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>, Ricardo Neri <ricardo.neri@...el.com>, Ravi V. Shankar <ravi.v.shankar@...el.com>, Ben Segall
<bsegall@...gle.com>, Daniel Bristot de Oliveira <bristot@...hat.com>, Dietmar Eggemann <dietmar.eggemann@....com>, Len Brown <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>, Rafael J. Wysocki
<rafael.j.wysocki@...el.com>, Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>, Steven Rostedt <rostedt@...dmis.org>, Tim Chen <tim.c.chen@...ux.intel.com>, Valentin Schneider
<vschneid@...hat.com>, Ionela Voinescu <ionela.voinescu@....com>, x86@...nel.org, linux-kernel@...r.kernel.org, Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>, Srikar Dronamraju
<srikar@...ux.vnet.ibm.com>, naveen.n.rao@...ux.vnet.ibm.com, Yicong Yang <yangyicong@...ilicon.com>, Barry Song <v-songbaohua@...o.com>, Chen Yu <yu.c.chen@...el.com>, Hillf Danton <hdanton@...a.com>

When balancing sibling domains that have different numbers of cores,
the tasks in each sibling domain should be proportional to the number
of cores in that domain. In preparation for implementing such a policy,
record the number of cores in a scheduling group.

Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
---
 kernel/sched/sched.h    |  1 +
 kernel/sched/topology.c | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3d0eb36350d2..5f7f36e45b87 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1860,6 +1860,7 @@ struct sched_group {
 	atomic_t		ref;
 
 	unsigned int		group_weight;
+	unsigned int		cores;
 	struct sched_group_capacity *sgc;
 	int			asym_prefer_cpu;	/* CPU of highest priority in group */
 	int			flags;
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 6d5628fcebcf..4ecdaef3f8ab 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1262,6 +1262,26 @@ build_sched_groups(struct sched_domain *sd, int cpu)
 	return 0;
 }
 
+#ifdef CONFIG_SCHED_SMT
+static inline int sched_group_cores(struct sched_group *sg)
+{
+	struct cpumask *mask = sched_domains_tmpmask2;
+	int cpu, cores = 0;
+
+	cpumask_copy(mask, sched_group_span(sg));
+	for_each_cpu(cpu, mask) {
+		cores++;
+		cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
+	}
+	return cores;
+}
+#else
+static inline int sched_group_cores(struct sched_group *sg)
+{
+	return sg->group_weight;
+}
+#endif
+
 /*
  * Initialize sched groups cpu_capacity.
  *
@@ -1282,6 +1302,7 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 		int cpu, max_cpu = -1;
 
 		sg->group_weight = cpumask_weight(sched_group_span(sg));
+		sg->cores = sched_group_cores(sg);
 
 		if (!(sd->flags & SD_ASYM_PACKING))
 			goto next;
-- 
2.32.0
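
For illustration only (the helper and the numbers below are made up,
and this is not the scheduler's balancing code), the proportionality
described in the commit log works out like this: with 12 runnable
tasks spread over sibling groups of 4 and 2 cores, the ideal split is
8 and 4 tasks respectively.

#include <stdio.h>

/* Ideal task share for one group, proportional to its core count. */
static unsigned int proportional_share(unsigned int total_tasks,
				       unsigned int group_cores,
				       unsigned int total_cores)
{
	return total_tasks * group_cores / total_cores;
}

int main(void)
{
	unsigned int tasks = 12;

	printf("4-core group: %u tasks\n", proportional_share(tasks, 4, 6));
	printf("2-core group: %u tasks\n", proportional_share(tasks, 2, 6));
	return 0;
}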




