Message-Id: <1345647560-30387-13-git-send-email-aarcange@redhat.com>
Date: Wed, 22 Aug 2012 16:58:56 +0200
From: Andrea Arcangeli <aarcange@...hat.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: Hillf Danton <dhillf@...il.com>, Dan Smith <danms@...ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Paul Turner <pjt@...gle.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Mike Galbraith <efault@....de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Lai Jiangshan <laijs@...fujitsu.com>,
Bharata B Rao <bharata.rao@...il.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Christoph Lameter <cl@...ux.com>,
Alex Shi <alex.shi@...el.com>,
Mauricio Faria de Oliveira <mauricfo@...ux.vnet.ibm.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Don Morris <don.morris@...com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: [PATCH 12/36] autonuma: knuma_migrated per NUMA node queues
This defines the knuma_migrated queues. There is one knuma_migrated
daemon per NUMA node with active CPUs. Pages are added to these queues
through the NUMA hinting page fault (the memory-follows-CPU algorithm
with false-sharing evaluation). The daemons are then woken up, with a
certain hysteresis, to migrate the memory in round-robin fashion from
all remote nodes to the daemon's local node.

The list head that belongs to the local node knuma_migrated runs on
must remain empty for now; it is not used.
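
For illustration only, here is a rough sketch of how a NUMA hinting
fault could queue a page and wake the destination node's daemon using
the fields added below. The helper name, the wakeup threshold and the
use of page->lru as the list link are assumptions of the sketch, not
part of this patch; the real enqueue path is introduced later in the
series.

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/*
 * Hypothetical sketch: queue "page" (currently on "page_nid") for
 * migration to "dst_nid" and wake dst_nid's knuma_migrated once
 * enough pages have accumulated.
 */
static void autonuma_enqueue_page(struct page *page, int page_nid, int dst_nid)
{
	pg_data_t *pgdat = NODE_DATA(dst_nid);
	unsigned long flags;
	bool wakeup;

	VM_BUG_ON(page_nid == dst_nid);

	spin_lock_irqsave(&pgdat->autonuma_lock, flags);
	/* one list head per source node: all pages from page_nid share it */
	list_add_tail(&page->lru, &pgdat->autonuma_migrate_head[page_nid]);
	pgdat->autonuma_nr_migrate_pages++;
	/* hysteresis: only wake the daemon once a batch has built up */
	wakeup = pgdat->autonuma_nr_migrate_pages >= 512; /* threshold is arbitrary */
	spin_unlock_irqrestore(&pgdat->autonuma_lock, flags);

	if (wakeup)
		wake_up_interruptible(&pgdat->autonuma_knuma_migrated_wait);
}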
Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
---
include/linux/mmzone.h | 18 ++++++++++++++++++
mm/page_alloc.c | 11 +++++++++++
2 files changed, 29 insertions(+), 0 deletions(-)
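
For context before the diff, a rough sketch (also not part of this
patch) of how the knuma_migrated daemon, introduced later in the
series, is expected to drain these queues. The kthread structure, the
batch size and the page->lru link are assumptions, and the hand-off to
the migration core is omitted.

#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/mmzone.h>
#include <linux/nodemask.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/*
 * Hypothetical sketch: wait until pages are queued, then take a small
 * batch from each remote node's list head in turn, so no single source
 * node can starve the others.
 */
static int knuma_migrated_sketch(void *arg)
{
	pg_data_t *pgdat = arg;

	while (!kthread_should_stop()) {
		int nid;

		wait_event_interruptible(pgdat->autonuma_knuma_migrated_wait,
					 pgdat->autonuma_nr_migrate_pages ||
					 kthread_should_stop());

		for_each_online_node(nid) {
			LIST_HEAD(batch);
			unsigned long flags;
			int taken = 0;

			/* the local node's own head stays empty for now */
			if (nid == pgdat->node_id)
				continue;

			spin_lock_irqsave(&pgdat->autonuma_lock, flags);
			while (taken < 32 &&	/* small batch keeps it round robin */
			       !list_empty(&pgdat->autonuma_migrate_head[nid])) {
				struct page *page;

				page = list_first_entry(&pgdat->autonuma_migrate_head[nid],
							struct page, lru);
				list_move_tail(&page->lru, &batch);
				pgdat->autonuma_nr_migrate_pages--;
				taken++;
			}
			spin_unlock_irqrestore(&pgdat->autonuma_lock, flags);

			/* "batch" would now be handed to the migration code
			 * to actually move the pages to this node */
		}
	}
	return 0;
}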
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 2daa54f..a5920f8 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -709,6 +709,24 @@ typedef struct pglist_data {
struct task_struct *kswapd; /* Protected by lock_memory_hotplug() */
int kswapd_max_order;
enum zone_type classzone_idx;
+#ifdef CONFIG_AUTONUMA
+ /*
+ * lock serializing all lists with heads in the
+ * autonuma_migrate_head[] array, and the
+ * autonuma_nr_migrate_pages field.
+ */
+ spinlock_t autonuma_lock;
+ /*
+ * All pages from node "page_nid" to be migrated to this node
+ * will be queued into the list
+ * autonuma_migrate_head[page_nid].
+ */
+ struct list_head autonuma_migrate_head[MAX_NUMNODES];
+ /* number of pages from other nodes queued for migration to this node */
+ unsigned long autonuma_nr_migrate_pages;
+ /* waitqueue for this node knuma_migrated daemon */
+ wait_queue_head_t autonuma_knuma_migrated_wait;
+#endif
} pg_data_t;
#define node_present_pages(nid) (NODE_DATA(nid)->node_present_pages)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a6337b3..8c9cad5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -58,6 +58,7 @@
#include <linux/prefetch.h>
#include <linux/migrate.h>
#include <linux/page-debug-flags.h>
+#include <linux/autonuma.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -4391,8 +4392,18 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
int nid = pgdat->node_id;
unsigned long zone_start_pfn = pgdat->node_start_pfn;
int ret;
+#ifdef CONFIG_AUTONUMA
+ int node_iter;
+#endif
pgdat_resize_init(pgdat);
+#ifdef CONFIG_AUTONUMA
+ spin_lock_init(&pgdat->autonuma_lock);
+ init_waitqueue_head(&pgdat->autonuma_knuma_migrated_wait);
+ pgdat->autonuma_nr_migrate_pages = 0;
+ for_each_node(node_iter)
+ INIT_LIST_HEAD(&pgdat->autonuma_migrate_head[node_iter]);
+#endif
init_waitqueue_head(&pgdat->kswapd_wait);
init_waitqueue_head(&pgdat->pfmemalloc_wait);
pgdat_page_cgroup_init(pgdat);
--