Message-Id: <1452189456-8486-3-git-send-email-mgorman@techsingularity.net>
Date: Thu, 7 Jan 2016 17:57:33 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Linux-Stable <stable@...r.kernel.org>
Cc: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH 2/5] sched/numa: Disable sched_numa_balancing on UMA systems
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>

commit c3b9bc5bbfc3750570d788afffd431263ef695c6 upstream.

Commit 2a1ed24 ("sched/numa: Prefer NUMA hotness over cache hotness")
sets the NUMA sched feature to true by default. However, this can
enable NUMA hinting faults on a UMA system, where they serve no
purpose.

This commit ensures that NUMA hinting faults occur only on a NUMA
system, by setting and resetting sched_numa_balancing accordingly (see
the sketch after the change list below).

This commit:

- Defines sched_numa_balancing under both CONFIG_SCHED_DEBUG and
  !CONFIG_SCHED_DEBUG; previously it was only defined under
  !CONFIG_SCHED_DEBUG.
- Checks sched_numa_balancing instead of sched_feat(NUMA).
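
For context: whether balancing should be on at all is decided by the
NUMA init code, which calls set_numabalancing_state() based on the
number of online nodes. A minimal sketch of that caller, modelled on
check_numabalancing_enable() in mm/mempolicy.c of this era (the names
numabalancing_override and check_numabalancing_enable are illustrative
assumptions, not part of this patch):

	/*
	 * Sketch only: enable automatic NUMA balancing when more than
	 * one memory node is online, unless the user forced a setting
	 * with the numa_balancing= boot parameter. On a UMA system
	 * neither branch fires, so sched_numa_balancing stays false
	 * and no NUMA hinting faults are generated.
	 */
	static void __init check_numabalancing_enable(void)
	{
		if (numabalancing_override)
			set_numabalancing_state(numabalancing_override == 1);
		else if (num_online_nodes() > 1)
			set_numabalancing_state(true);
	}

With sched_numa_balancing now defined in both debug and non-debug
builds, this call path disables NUMA hinting faults on UMA regardless
of the NUMA sched feature's default.
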
Signed-off-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1439290813-6683-3-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
---
 kernel/sched/core.c  | 14 +++++---------
 kernel/sched/fair.c  |  4 ++--
 kernel/sched/sched.h |  6 ------
 3 files changed, 7 insertions(+), 17 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 98594051d6d5..6cd6ce1fe161 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2115,22 +2115,18 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
-#ifdef CONFIG_SCHED_DEBUG
+__read_mostly bool sched_numa_balancing;
+
 void set_numabalancing_state(bool enabled)
 {
+	sched_numa_balancing = enabled;
+#ifdef CONFIG_SCHED_DEBUG
 	if (enabled)
 		sched_feat_set("NUMA");
 	else
 		sched_feat_set("NO_NUMA");
-}
-#else
-__read_mostly bool sched_numa_balancing;
-
-void set_numabalancing_state(bool enabled)
-{
-	sched_numa_balancing = enabled;
-}
 #endif /* CONFIG_SCHED_DEBUG */
+}
 
 #ifdef CONFIG_PROC_SYSCTL
 int sysctl_numa_balancing(struct ctl_table *table, int write,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 33d02d6a3cf4..0d5987e8e329 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5525,10 +5525,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	unsigned long src_faults, dst_faults;
 	int src_nid, dst_nid;
 
-	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
+	if (!sched_numa_balancing)
 		return -1;
 
-	if (!sched_feat(NUMA))
+	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
 		return -1;
 
 	src_nid = cpu_to_node(env->src_cpu);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 252cfbb78348..fb75db386adc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1004,14 +1004,8 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 #endif /* SCHED_DEBUG && HAVE_JUMP_LABEL */
 
 #ifdef CONFIG_NUMA_BALANCING
-#define sched_feat_numa(x) sched_feat(x)
-#ifdef CONFIG_SCHED_DEBUG
-#define sched_numa_balancing sched_feat_numa(NUMA)
-#else
 extern bool sched_numa_balancing;
-#endif /* CONFIG_SCHED_DEBUG */
 #else
-#define sched_feat_numa(x) (0)
 #define sched_numa_balancing (0)
 #endif /* CONFIG_NUMA_BALANCING */
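
A side note on the runtime toggle: the sysctl handler visible as the
last context line of the core.c hunk, sysctl_numa_balancing(), is what
routes writes to /proc/sys/kernel/numa_balancing into
set_numabalancing_state(). A simplified sketch of that handler (the
exact body may differ between releases; treat as illustrative, not as
the code this patch touches):

	int sysctl_numa_balancing(struct ctl_table *table, int write,
				  void __user *buffer, size_t *lenp, loff_t *ppos)
	{
		struct ctl_table t;
		int state = sched_numa_balancing;
		int err;

		/* only root may flip the switch */
		if (write && !capable(CAP_SYS_ADMIN))
			return -EPERM;

		/* read or write a local copy via the generic helper */
		t = *table;
		t.data = &state;
		err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
		if (err < 0)
			return err;
		if (write)
			set_numabalancing_state(state);
		return err;
	}

So "echo 1 > /proc/sys/kernel/numa_balancing" ends up in
set_numabalancing_state(true), and reads report the same flag the
scheduler now checks in migrate_degrades_locality().
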
--
2.6.4