Message-Id: <20190624025604.30896-1-ying.huang@intel.com>
Date: Mon, 24 Jun 2019 10:56:04 +0800
From: Huang Ying <ying.huang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Huang Ying <ying.huang@...el.com>,
Rik van Riel <riel@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Mel Gorman <mgorman@...e.de>, jhladky@...hat.com,
lvenanci@...hat.com, Ingo Molnar <mingo@...nel.org>
Subject: [PATCH -mm] autonuma: Fix scan period updating

The autonuma scan period should be increased (i.e., scanning is slowed
down) when the majority of the page accesses are shared with other
processes.  But in the current code, the scan period is decreased
(i.e., scanning is sped up) in that situation.
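
To make the inversion concrete, below is a minimal userspace sketch
(illustrative only, not part of the patch) that feeds a shared-heavy
fault profile into the old and new ratio computations, assuming the
NUMA_PERIOD_SLOTS and NUMA_PERIOD_THRESHOLD values used in
kernel/sched/fair.c:

	#include <stdio.h>

	#define NUMA_PERIOD_SLOTS	10
	#define NUMA_PERIOD_THRESHOLD	7

	int main(void)
	{
		/* Example fault counts: most accesses are shared. */
		unsigned long private = 1, shared = 9;

		/*
		 * Old code: ps_ratio == 1, below the threshold, so the
		 * "slow scanning down" branch is never taken for a
		 * shared-dominated workload.
		 */
		int ps_ratio = (private * NUMA_PERIOD_SLOTS) /
			       (private + shared);

		/*
		 * New code: sp_ratio == 9, above the threshold, so the
		 * scan period is increased and scanning slows down, as
		 * intended.
		 */
		int sp_ratio = (shared * NUMA_PERIOD_SLOTS) /
			       (private + shared);

		printf("ps_ratio=%d sp_ratio=%d threshold=%d\n",
		       ps_ratio, sp_ratio, NUMA_PERIOD_THRESHOLD);
		return 0;
	}
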
This patch fixes the code.  The fix has been verified by tracing the
scan period changes and by watching the /proc/vmstat numa_pte_updates
counter while running a multi-threaded memory-accessing program in
which most memory areas are accessed by multiple threads.
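
For the counter half of that check, here is a small illustrative
helper (again, not part of the patch) that samples numa_pte_updates
from /proc/vmstat; diffing two samples taken some seconds apart gives
the PTE update rate, which should drop once the scan period grows:

	#include <stdio.h>

	static long read_numa_pte_updates(void)
	{
		FILE *fp = fopen("/proc/vmstat", "r");
		char line[128];
		long val = -1;

		if (!fp)
			return -1;
		while (fgets(line, sizeof(line), fp)) {
			if (sscanf(line, "numa_pte_updates %ld", &val) == 1)
				break;
		}
		fclose(fp);
		return val;
	}

	int main(void)
	{
		/* Sample the counter once per invocation. */
		printf("numa_pte_updates = %ld\n", read_numa_pte_updates());
		return 0;
	}
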
Fixes: 37ec97deb3a8 ("sched/numa: Slow down scan rate if shared faults dominate")
Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
Cc: Rik van Riel <riel@...hat.com>
Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Mel Gorman <mgorman@...e.de>
Cc: jhladky@...hat.com
Cc: lvenanci@...hat.com
Cc: Ingo Molnar <mingo@...nel.org>
---
 kernel/sched/fair.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f35930f5e528..79bc4d2d1e58 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1923,7 +1923,7 @@ static void update_task_scan_period(struct task_struct *p,
 			unsigned long shared, unsigned long private)
 {
 	unsigned int period_slot;
-	int lr_ratio, ps_ratio;
+	int lr_ratio, sp_ratio;
 	int diff;
 
 	unsigned long remote = p->numa_faults_locality[0];
@@ -1954,22 +1954,22 @@ static void update_task_scan_period(struct task_struct *p,
 	 */
 	period_slot = DIV_ROUND_UP(p->numa_scan_period, NUMA_PERIOD_SLOTS);
 	lr_ratio = (local * NUMA_PERIOD_SLOTS) / (local + remote);
-	ps_ratio = (private * NUMA_PERIOD_SLOTS) / (private + shared);
+	sp_ratio = (shared * NUMA_PERIOD_SLOTS) / (private + shared);
 
-	if (ps_ratio >= NUMA_PERIOD_THRESHOLD) {
+	if (sp_ratio >= NUMA_PERIOD_THRESHOLD) {
 		/*
-		 * Most memory accesses are local. There is no need to
-		 * do fast NUMA scanning, since memory is already local.
+		 * Most memory accesses are shared with other tasks.
+		 * There is no point in continuing fast NUMA scanning,
+		 * since other tasks may just move the memory elsewhere.
 		 */
-		int slot = ps_ratio - NUMA_PERIOD_THRESHOLD;
+		int slot = sp_ratio - NUMA_PERIOD_THRESHOLD;
 		if (!slot)
 			slot = 1;
 		diff = slot * period_slot;
 	} else if (lr_ratio >= NUMA_PERIOD_THRESHOLD) {
 		/*
-		 * Most memory accesses are shared with other tasks.
-		 * There is no point in continuing fast NUMA scanning,
-		 * since other tasks may just move the memory elsewhere.
+		 * Most memory accesses are local. There is no need to
+		 * do fast NUMA scanning, since memory is already local.
 		 */
 		int slot = lr_ratio - NUMA_PERIOD_THRESHOLD;
 		if (!slot)
@@ -1981,7 +1981,7 @@ static void update_task_scan_period(struct task_struct *p,
 		 * yet they are not on the local NUMA node. Speed up
 		 * NUMA scanning to get the memory moved over.
 		 */
-		int ratio = max(lr_ratio, ps_ratio);
+		int ratio = max(lr_ratio, sp_ratio);
 
 		diff = -(NUMA_PERIOD_THRESHOLD - ratio) * period_slot;
 	}
--
2.21.0