Message-Id: <1358295894-24167-10-git-send-email-cody@linux.vnet.ibm.com>
Date: Tue, 15 Jan 2013 16:24:46 -0800
From: Cody P Schafer <cody@...ux.vnet.ibm.com>
To: Linux MM <linux-mm@...ck.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Cody P Schafer <jmesmon@...il.com>,
Cody P Schafer <cody@...ux.vnet.ibm.com>
Subject: [PATCH 09/17] mm/page_alloc: use zone_end_pfn() & zone_spans_pfn()

From: Cody P Schafer <jmesmon@...il.com>

Replace several open-coded equivalents of zone_end_pfn(), and one of
zone_spans_pfn(), in the page allocator with the provided helper
functions.

Signed-off-by: Cody P Schafer <cody@...ux.vnet.ibm.com>
---
mm/page_alloc.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
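
For reference (a note below the "---" cut, so it stays out of the commit
message): the helpers used below were introduced earlier in this series.
Their definitions are essentially the following sketch of what lives in
include/linux/mmzone.h:

	/* First pfn past the end of the zone (exclusive bound). */
	static inline unsigned long zone_end_pfn(const struct zone *zone)
	{
		return zone->zone_start_pfn + zone->spanned_pages;
	}

	/* True iff @pfn falls inside @zone's spanned pfn range. */
	static inline bool zone_spans_pfn(const struct zone *zone,
					  unsigned long pfn)
	{
		return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
	}

	/* Node-level analogue, used in the alloc_node_mem_map() hunk. */
	static inline unsigned long pgdat_end_pfn(pg_data_t *pgdat)
	{
		return pgdat->node_start_pfn + pgdat->node_spanned_pages;
	}

zone_spans_pfn() is exactly the bounds check previously open-coded in
is_pageblock_removable_nolock(), so each conversion below is intended to
be behaviourally identical.
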
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5d70ce..f8ed277 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1272,7 +1272,7 @@ void mark_free_pages(struct zone *zone)
 
 	spin_lock_irqsave(&zone->lock, flags);
 
-	max_zone_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	max_zone_pfn = zone_end_pfn(zone);
 	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
 		if (pfn_valid(pfn)) {
 			struct page *page = pfn_to_page(pfn);
@@ -3775,7 +3775,7 @@ static void setup_zone_migrate_reserve(struct zone *zone)
 	 * the block.
 	 */
 	start_pfn = zone->zone_start_pfn;
-	end_pfn = start_pfn + zone->spanned_pages;
+	end_pfn = zone_end_pfn(zone);
 	start_pfn = roundup(start_pfn, pageblock_nr_pages);
 	reserve = roundup(min_wmark_pages(zone), pageblock_nr_pages) >>
 							pageblock_order;
@@ -3889,7 +3889,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		 * pfn out of zone.
 		 */
 		if ((z->zone_start_pfn <= pfn)
-		    && (pfn < z->zone_start_pfn + z->spanned_pages)
+		    && (pfn < zone_end_pfn(z))
 		    && !(pfn & (pageblock_nr_pages - 1)))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 
@@ -4617,7 +4617,7 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
 		 * for the buddy allocator to function correctly.
 		 */
 		start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
-		end = pgdat->node_start_pfn + pgdat->node_spanned_pages;
+		end = pgdat_end_pfn(pgdat);
 		end = ALIGN(end, MAX_ORDER_NR_PAGES);
 		size = (end - start) * sizeof(struct page);
 		map = alloc_remap(pgdat->node_id, size);
@@ -5735,8 +5735,7 @@ bool is_pageblock_removable_nolock(struct page *page)
 
 	zone = page_zone(page);
 	pfn = page_to_pfn(page);
-	if (zone->zone_start_pfn > pfn ||
-			zone->zone_start_pfn + zone->spanned_pages <= pfn)
+	if (!zone_spans_pfn(zone, pfn))
 		return false;
 
 	return !has_unmovable_pages(zone, page, 0, true);
--
1.8.0.3