Date: Sat, 10 Oct 2015 20:53:12 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: heiko.carstens@...ibm.com, Tejun Heo <tj@...nel.org>,
Ingo Molnar <mingo@...nel.org>, Rik van Riel <riel@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH 2/3] sched: change select_fallback_rq() to use
for_each_cpu_and()
We can make "cpumask *nodemask" local to the nid != NUMA_NO_NODE branch and
use for_each_cpu_and() to simplify this code a little bit.
And NUMA_NO_NODE reads better than the bare "-1".
Signed-off-by: Oleg Nesterov <oleg@...hat.com>
---
kernel/sched/core.c | 14 ++++++--------
1 files changed, 6 insertions(+), 8 deletions(-)
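For reference, the two loop shapes visit exactly the same CPUs. A minimal
userspace sketch below illustrates this (plain unsigned long bitmaps stand in
for struct cpumask, and the function names are hypothetical, not kernel API):

```c
/* Userspace sketch, NOT kernel code: model a cpumask as an unsigned long
 * bitmap and show that walking the AND of two masks (what
 * for_each_cpu_and() does) visits the same CPUs as walking one mask and
 * testing membership in the other inside the loop (the old pattern). */

#define NR_CPUS 64

/* Old pattern: walk nodemask, filter by the allowed mask inside the loop. */
static unsigned long walk_and_test(unsigned long nodemask, unsigned long allowed)
{
    unsigned long visited = 0;
    int cpu;

    for (cpu = 0; cpu < NR_CPUS; cpu++) {
        if (!(nodemask & (1UL << cpu)))
            continue;                 /* not on this node */
        if (allowed & (1UL << cpu))
            visited |= 1UL << cpu;    /* on the node and allowed */
    }
    return visited;
}

/* New pattern: walk the intersection directly, as for_each_cpu_and() does. */
static unsigned long walk_intersection(unsigned long nodemask, unsigned long allowed)
{
    unsigned long visited = 0;
    unsigned long both = nodemask & allowed;
    int cpu;

    for (cpu = 0; cpu < NR_CPUS; cpu++)
        if (both & (1UL << cpu))
            visited |= 1UL << cpu;
    return visited;
}
```

Since the sets are identical, dropping the cpumask_test_cpu() call from the
loop body is purely a simplification, not a behavior change.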
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a2ef0cf..e4fa6be 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1308,24 +1308,22 @@ static inline bool cpu_allowed(int cpu)
static int select_fallback_rq(int cpu, struct task_struct *p)
{
int nid = cpu_to_node(cpu);
- const struct cpumask *nodemask = NULL;
enum { cpuset, possible, fail } state = cpuset;
int dest_cpu;
/*
* If the node that the cpu is on has been offlined, cpu_to_node()
- * will return -1. There is no cpu on the node, and we should
- * select the cpu on the other node.
+ * will return NUMA_NO_NODE. There is no CPU on that node, so we
+ * should pick a CPU on another node.
*/
- if (nid != -1) {
- nodemask = cpumask_of_node(nid);
+ if (nid != NUMA_NO_NODE) {
+ const struct cpumask *nodemask = cpumask_of_node(nid);
/* Look for allowed, online CPU in same node. */
- for_each_cpu(dest_cpu, nodemask) {
+ for_each_cpu_and(dest_cpu, nodemask, tsk_cpus_allowed(p)) {
if (!cpu_allowed(dest_cpu))
continue;
- if (cpumask_test_cpu(dest_cpu, tsk_cpus_allowed(p)))
- return dest_cpu;
+ return dest_cpu;
}
}
--
1.5.5.1