Message-ID: <tip-46a73e8a1c1720f7713b5e2df68e9dd272015b5d@git.kernel.org>
Date: Wed, 13 Nov 2013 09:25:22 -0800
From: tip-bot for Rik van Riel <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...nel.org,
peterz@...radead.org, riel@...hat.com, mgorman@...e.de,
tglx@...utronix.de, prarit@...hat.com
Subject: [tip:sched/urgent] sched/numa: Fix NULL pointer dereference in task_numa_migrate()
Commit-ID: 46a73e8a1c1720f7713b5e2df68e9dd272015b5d
Gitweb: http://git.kernel.org/tip/46a73e8a1c1720f7713b5e2df68e9dd272015b5d
Author: Rik van Riel <riel@...hat.com>
AuthorDate: Mon, 11 Nov 2013 19:29:25 -0500
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 13 Nov 2013 13:33:51 +0100
sched/numa: Fix NULL pointer dereference in task_numa_migrate()
The cpusets code can split up the scheduler's domain tree into
smaller domains. Some of those smaller domains may not cross
NUMA nodes at all, leading to a NULL pointer dereference on the
per-cpu sd_numa pointer.
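For illustration only (not from the original report), a sketch of
one way such a confined domain tree can be set up, assuming a v1
cpuset mount at /sys/fs/cgroup/cpuset; the child cpuset name, the
CPU list "0-3" and node "0" are placeholders for whatever matches
the test machine:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write a value into a cpuset control file, aborting on error. */
static void write_file(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, val, strlen(val)) < 0) {
		perror(path);
		exit(1);
	}
	close(fd);
}

int main(void)
{
	/* Child cpuset confined to CPUs on a single NUMA node. */
	mkdir("/sys/fs/cgroup/cpuset/one_node", 0755);
	write_file("/sys/fs/cgroup/cpuset/one_node/cpuset.cpus", "0-3");
	write_file("/sys/fs/cgroup/cpuset/one_node/cpuset.mems", "0");

	/* Disabling balancing at the root makes the scheduler build
	 * separate domain trees per cpuset; the child's tree then
	 * never crosses a node boundary. */
	write_file("/sys/fs/cgroup/cpuset/cpuset.sched_load_balance", "0");
	return 0;
}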
Tasks cannot be migrated out of their domain, so the patch
also sets p->numa_preferred_nid to the node they are currently
on, to prevent the migration from being retried over and over
again.
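For reference, a minimal user-space sketch of the guarded pattern
the patch introduces; the struct sched_domain here, the
lookup_numa_domain() helper and the node numbers are hypothetical
stand-ins for the kernel's sd_numa machinery, not kernel code:

#include <stdio.h>

/* Hypothetical stand-in: only the field this example needs. */
struct sched_domain {
	int imbalance_pct;
};

/* Stand-in lookup: returns NULL when the task's domain does not
 * cross NUMA nodes, as cpusets can arrange. */
static struct sched_domain *lookup_numa_domain(int cpu)
{
	(void)cpu;
	return NULL;	/* simulate the cpuset-confined case */
}

static int numa_preferred_nid = -1;

/* Mirrors the fixed control flow: check the pointer before
 * dereferencing it, and on failure pin the preferred node to the
 * task's current node so the migration is not retried. */
static int try_numa_migrate(int cpu, int current_nid)
{
	struct sched_domain *sd = lookup_numa_domain(cpu);
	int imbalance_pct = 100;	/* default when no NUMA domain */

	if (sd)
		imbalance_pct = 100 + (sd->imbalance_pct - 100) / 2;

	if (!sd) {
		numa_preferred_nid = current_nid;	/* stop retrying */
		return -1;
	}

	printf("migrating, imbalance_pct=%d\n", imbalance_pct);
	return 0;
}

int main(void)
{
	if (try_numa_migrate(0, 0))
		printf("no NUMA domain; preferred nid pinned to %d\n",
		       numa_preferred_nid);
	return 0;
}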
Reported-by: Prarit Bhargava <prarit@...hat.com>
Signed-off-by: Rik van Riel <riel@...hat.com>
Signed-off-by: Peter Zijlstra <peterz@...radead.org>
Cc: Mel Gorman <mgorman@...e.de>
Link: http://lkml.kernel.org/n/tip-oosqomw0Jput0Jkvoowhrqtu@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/fair.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index df77c60..c11e36f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1201,9 +1201,21 @@ static int task_numa_migrate(struct task_struct *p)
 	 */
 	rcu_read_lock();
 	sd = rcu_dereference(per_cpu(sd_numa, env.src_cpu));
-	env.imbalance_pct = 100 + (sd->imbalance_pct - 100) / 2;
+	if (sd)
+		env.imbalance_pct = 100 + (sd->imbalance_pct - 100) / 2;
 	rcu_read_unlock();
 
+	/*
+	 * Cpusets can break the scheduler domain tree into smaller
+	 * balance domains, some of which do not cross NUMA boundaries.
+	 * Tasks that are "trapped" in such domains cannot be migrated
+	 * elsewhere, so there is no point in (re)trying.
+	 */
+	if (unlikely(!sd)) {
+		p->numa_preferred_nid = cpu_to_node(task_cpu(p));
+		return -EINVAL;
+	}
+
 	taskweight = task_weight(p, env.src_nid);
 	groupweight = group_weight(p, env.src_nid);
 	update_numa_stats(&env.src_stats, env.src_nid);
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/