Message-Id: <20170831115802.26365-5-brendan.jackman@arm.com>
Date: Thu, 31 Aug 2017 12:58:01 +0100
From: Brendan Jackman <brendan.jackman@....com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Josef Bacik <josef@...icpanda.com>,
Morten Rasmussen <morten.rasmussen@....com>
Subject: [PATCH v3 4/5] sched/fair: Fix use of find_idlest_group when no groups are allowed

When p is allowed on none of the CPUs in the sched_domain, we
currently return NULL from find_idlest_group(), and pointlessly
continue the search on lower sched_domain levels (where p is also not
allowed) before returning prev_cpu regardless (as we have not updated
new_cpu).

Add an explicit check for this case, and add a comment to
find_idlest_group(). Now when find_idlest_group() returns NULL, it
always means that the local group is allowed and idlest.
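
For illustration only, below is a minimal user-space C sketch of the
early-return idea: if the task's allowed-CPU mask does not overlap the
domain's span, the search is skipped and prev_cpu is returned
unchanged. The bitmask type and helper names are simplified stand-ins,
not the kernel's cpumask API.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's cpumask type and helpers. */
typedef uint64_t cpu_mask_t;

static int masks_intersect(cpu_mask_t a, cpu_mask_t b)
{
	return (a & b) != 0;
}

/*
 * Model of the fixed behaviour: if the task is allowed on none of the
 * CPUs spanned by the domain, bail out immediately and keep prev_cpu
 * instead of walking lower domain levels for nothing.
 */
static int pick_idlest_cpu(cpu_mask_t domain_span, cpu_mask_t allowed,
			   int prev_cpu)
{
	if (!masks_intersect(domain_span, allowed))
		return prev_cpu;

	/* ... the real search over the domain's groups happens here ... */
	return prev_cpu;
}

int main(void)
{
	/* Task allowed only on CPUs 4-7, domain spans CPUs 0-3. */
	cpu_mask_t span = 0x0f, allowed = 0xf0;

	printf("chosen cpu: %d\n", pick_idlest_cpu(span, allowed, 2));
	return 0;
}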
Signed-off-by: Brendan Jackman <brendan.jackman@....com>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Josef Bacik <josef@...icpanda.com>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Morten Rasmussen <morten.rasmussen@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
---
kernel/sched/fair.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0ce75bb..2608091 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5380,6 +5380,8 @@ static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
 /*
  * find_idlest_group finds and returns the least busy CPU group within the
  * domain.
+ *
+ * Assumes p is allowed on at least one CPU in sd.
  */
 static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p,
@@ -5567,6 +5569,9 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 {
 	int new_cpu = prev_cpu;
 
+	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
+		return prev_cpu;
+
 	while (sd) {
 		struct sched_group *group;
 		struct sched_domain *tmp;
--
2.7.4