Message-Id: <1436361633-4970-1-git-send-email-srikar@linux.vnet.ibm.com>
Date:	Wed,  8 Jul 2015 18:50:33 +0530
From:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To:	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org, srikar@...ux.vnet.ibm.com,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>
Subject: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.

In commit 8a9e62a ("sched/numa: Prefer NUMA hotness over cache hotness"),
the sched feature NUMA was set to true unconditionally. However, this sched
feature was supposed to be enabled only on NUMA boxes, through
set_numabalancing_state().
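
(For reference, and not part of this patch: on CONFIG_SCHED_DEBUG builds,
set_numabalancing_state() in kernel/sched/core.c toggles the feature roughly
like this -- a sketch from memory, so treat it as illustrative only:

	#ifdef CONFIG_NUMA_BALANCING
	/* Called at boot when several NUMA nodes are detected, and from the
	 * numa_balancing sysctl, to switch NUMA balancing on or off. */
	void set_numabalancing_state(bool enabled)
	{
		if (enabled)
			sched_feat_set("NUMA");
		else
			sched_feat_set("NO_NUMA");
	}
	#endif

i.e. the NUMA feature is only meant to be turned on from there, on machines
where NUMA balancing can actually be used.)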

To restore the earlier behaviour, bring back the NUMA_FAVOUR_HIGHER sched
feature.

Signed-off-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 587a2f6..aea72d5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5676,10 +5676,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	unsigned long src_faults, dst_faults;
 	int src_nid, dst_nid;
 
-	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
+	if (!sched_feat(NUMA) || !sched_feat(NUMA_FAVOUR_HIGHER))
 		return -1;
 
-	if (!sched_feat(NUMA))
+	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
 		return -1;
 
 	src_nid = cpu_to_node(env->src_cpu);
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 83a50e7..d4d4726 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -79,12 +79,13 @@ SCHED_FEAT(LB_MIN, false)
  * numa_balancing=
  */
 #ifdef CONFIG_NUMA_BALANCING
+SCHED_FEAT(NUMA,	false)
 
 /*
- * NUMA will favor moving tasks towards nodes where a higher number of
- * hinting faults are recorded during active load balancing. It will
- * resist moving tasks towards nodes where a lower number of hinting
- * faults have been recorded.
+ * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
+ * higher number of hinting faults are recorded during active load
+ * balancing. It will resist moving tasks towards nodes where a lower
+ * number of hinting faults have been recorded.
  */
-SCHED_FEAT(NUMA,	true)
+SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)
 #endif
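
(Background, not part of the patch: each SCHED_FEAT(name, default) entry in
features.h becomes a bit that the sched_feat(name) macro tests, roughly as
below on builds without the jump-label optimisation -- a sketch from memory
of kernel/sched/sched.h:

	/* Test whether sched feature 'x' is currently enabled. */
	#define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))

With CONFIG_SCHED_DEBUG the current values can be inspected and flipped at
runtime via /sys/kernel/debug/sched_features, e.g. by writing "NUMA" or
"NO_NUMA" to that file.)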

