Message-ID: <tip-e054637597ba36d3729ba6a3a3dd7aad8e2a3003@git.kernel.org>
Date: Tue, 9 Oct 2018 00:01:25 -0700
From: tip-bot for Srikar Dronamraju <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: mgorman@...hsingularity.net, linux-kernel@...r.kernel.org,
mingo@...nel.org, peterz@...radead.org, linux-mm@...ck.org,
riel@...riel.com, tglx@...utronix.de,
torvalds@...ux-foundation.org, hpa@...or.com,
srikar@...ux.vnet.ibm.com
Subject: [tip:sched/urgent] mm, sched/numa: Remove remaining traces of NUMA
rate-limiting

Commit-ID:  e054637597ba36d3729ba6a3a3dd7aad8e2a3003
Gitweb:     https://git.kernel.org/tip/e054637597ba36d3729ba6a3a3dd7aad8e2a3003
Author:     Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
AuthorDate: Sat, 6 Oct 2018 16:53:19 +0530
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Tue, 9 Oct 2018 08:30:51 +0200

mm, sched/numa: Remove remaining traces of NUMA rate-limiting

Remove the leftover pglist_data::numabalancing_migrate_lock and its
initialization; we stopped using this lock with:

  efaffc5e40ae ("mm, sched/numa: Remove rate-limiting of automatic NUMA balancing migration")

[ mingo: Rewrote the changelog. ]
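
Not part of the patch, just context: before efaffc5e40ae this lock
serialized resets of the per-node migration rate-limiting window. A
rough sketch of the pattern, paraphrased from memory of the code that
commit deleted from mm/migrate.c (exact field names, the trylock vs.
lock choice, and tracing details may differ from the historical
source):

  /* Returns true when the node's migration quota for this window is spent. */
  static bool numamigrate_update_ratelimit(pg_data_t *pgdat,
					   unsigned long nr_pages)
  {
	/* Open a new window once the current one has expired */
	if (time_after(jiffies, pgdat->numabalancing_migrate_next_window) &&
	    spin_trylock(&pgdat->numabalancing_migrate_lock)) {
		pgdat->numabalancing_migrate_nr_pages = 0;
		pgdat->numabalancing_migrate_next_window = jiffies +
			msecs_to_jiffies(migrate_interval_millisecs);
		spin_unlock(&pgdat->numabalancing_migrate_lock);
	}

	/* Refuse the migration if the window's page quota is exhausted */
	if (pgdat->numabalancing_migrate_nr_pages > ratelimit_pages)
		return true;

	pgdat->numabalancing_migrate_nr_pages += nr_pages;
	return false;
  }

With that logic gone, nothing ever takes numabalancing_migrate_lock,
which is why this patch can drop the field and its spin_lock_init().
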
Signed-off-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Acked-by: Mel Gorman <mgorman@...hsingularity.net>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Linux-MM <linux-mm@...ck.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rik van Riel <riel@...riel.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1538824999-31230-1-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 include/linux/mmzone.h |  4 ----
 mm/page_alloc.c        | 10 ----------
 2 files changed, 14 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3f4c0b167333..d4b0c79d2924 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -667,10 +667,6 @@ typedef struct pglist_data {
 	enum zone_type kcompactd_classzone_idx;
 	wait_queue_head_t kcompactd_wait;
 	struct task_struct *kcompactd;
-#endif
-#ifdef CONFIG_NUMA_BALANCING
-	/* Lock serializing the migrate rate limiting window */
-	spinlock_t numabalancing_migrate_lock;
 #endif
 	/*
 	 * This is a per-node reserve of pages that are not available
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 706a738c0aee..e2ef1c17942f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6193,15 +6193,6 @@ static unsigned long __init calc_memmap_size(unsigned long spanned_pages,
 	return PAGE_ALIGN(pages * sizeof(struct page)) >> PAGE_SHIFT;
 }
 
-#ifdef CONFIG_NUMA_BALANCING
-static void pgdat_init_numabalancing(struct pglist_data *pgdat)
-{
-	spin_lock_init(&pgdat->numabalancing_migrate_lock);
-}
-#else
-static void pgdat_init_numabalancing(struct pglist_data *pgdat) {}
-#endif
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void pgdat_init_split_queue(struct pglist_data *pgdat)
 {
@@ -6226,7 +6217,6 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
 {
 	pgdat_resize_init(pgdat);
 
-	pgdat_init_numabalancing(pgdat);
 	pgdat_init_split_queue(pgdat);
 	pgdat_init_kcompactd(pgdat);