Message-ID: <tip-f32d782e31bf079f600dcec126ed117b0577e85c@git.kernel.org>
Date: Mon, 15 May 2017 02:06:00 -0700
From: tip-bot for Lauro Ramos Venancio <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: torvalds@...ux-foundation.org, tglx@...utronix.de,
mingo@...nel.org, linux-kernel@...r.kernel.org,
lvenanci@...hat.com, peterz@...radead.org, hpa@...or.com,
efault@....de
Subject: [tip:sched/core] sched/topology: Optimize build_group_mask()
Commit-ID: f32d782e31bf079f600dcec126ed117b0577e85c
Gitweb: http://git.kernel.org/tip/f32d782e31bf079f600dcec126ed117b0577e85c
Author: Lauro Ramos Venancio <lvenanci@...hat.com>
AuthorDate: Thu, 20 Apr 2017 16:51:40 -0300
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 15 May 2017 10:15:26 +0200
sched/topology: Optimize build_group_mask()
The group mask is always used in intersection with the group CPUs. So,
when building the group mask, we don't have to care about CPUs that are
not part of the group.
Signed-off-by: Lauro Ramos Venancio <lvenanci@...hat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: lwang@...hat.com
Cc: riel@...hat.com
Link: http://lkml.kernel.org/r/1492717903-5195-2-git-send-email-lvenanci@redhat.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/topology.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 81c8203..5a4d9ae 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -506,12 +506,12 @@ enum s_alloc {
  */
 static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
 {
-	const struct cpumask *span = sched_domain_span(sd);
+	const struct cpumask *sg_span = sched_group_cpus(sg);
 	struct sd_data *sdd = sd->private;
 	struct sched_domain *sibling;
 	int i;
 
-	for_each_cpu(i, span) {
+	for_each_cpu(i, sg_span) {
 		sibling = *per_cpu_ptr(sdd->sd, i);
 		if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
 			continue;