Message-Id: <1365729237-29711-8-git-send-email-cody@linux.vnet.ibm.com>
Date: Thu, 11 Apr 2013 18:13:39 -0700
From: Cody P Schafer <cody@...ux.vnet.ibm.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Mel Gorman <mgorman@...e.de>, Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Cody P Schafer <cody@...ux.vnet.ibm.com>,
Simon Jeons <simon.jeons@...il.com>
Subject: [RFC PATCH v2 07/25] page_alloc: add return_pages_to_zone() when DYNAMIC_NUMA is enabled.
Add return_pages_to_zone(), a minimized version of __free_pages_ok()
that handles adding pages which have been removed from one zone into a
new zone. It reuses the existing free_one_page() path to place the
pages on the destination zone's free lists.
Signed-off-by: Cody P Schafer <cody@...ux.vnet.ibm.com>
---
mm/internal.h | 5 ++++-
mm/page_alloc.c | 17 +++++++++++++++++
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/mm/internal.h b/mm/internal.h
index b11e574..a70c77b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -104,6 +104,10 @@ extern void prep_compound_page(struct page *page, unsigned long order);
#ifdef CONFIG_MEMORY_FAILURE
extern bool is_free_buddy_page(struct page *page);
#endif
+#ifdef CONFIG_DYNAMIC_NUMA
+void return_pages_to_zone(struct page *page, unsigned int order,
+ struct zone *zone);
+#endif
#ifdef CONFIG_MEMORY_HOTPLUG
/*
@@ -114,7 +118,6 @@ extern int ensure_zone_is_initialized(struct zone *zone,
#endif
#if defined CONFIG_COMPACTION || defined CONFIG_CMA
-
/*
* in mm/compaction.c
*/
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 96909bb..1fbf5f2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -442,6 +442,12 @@ static inline void set_page_order(struct page *page, int order)
__SetPageBuddy(page);
}
+static inline void set_free_page_order(struct page *page, int order)
+{
+ set_page_private(page, order);
+ VM_BUG_ON(!PageBuddy(page));
+}
+
static inline void rmv_page_order(struct page *page)
{
__ClearPageBuddy(page);
@@ -738,6 +744,17 @@ static void __free_pages_ok(struct page *page, unsigned int order)
local_irq_restore(flags);
}
+#ifdef CONFIG_DYNAMIC_NUMA
+void return_pages_to_zone(struct page *page, unsigned int order,
+ struct zone *zone)
+{
+ unsigned long flags;
+ local_irq_save(flags);
+ free_one_page(zone, page, order, get_freepage_migratetype(page));
+ local_irq_restore(flags);
+}
+#endif
+
/*
* Read access to zone->managed_pages is safe because it's unsigned long,
* but we still need to serialize writers. Currently all callers of
--
1.8.2.1