Message-Id: <1373594635-131067-2-git-send-email-holt@sgi.com>
Date: Thu, 11 Jul 2013 21:03:52 -0500
From: Robin Holt <holt@....com>
To: "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>
Cc: Robin Holt <holt@....com>, Nate Zimmer <nzimmer@....com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>, Rob Landley <rob@...dley.net>,
Mike Travis <travis@....com>,
Daniel J Blueman <daniel@...ascale-asia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Greg KH <gregkh@...uxfoundation.org>,
Yinghai Lu <yinghai@...nel.org>, Mel Gorman <mgorman@...e.de>
Subject: [RFC 1/4] memblock: Introduce a for_each_reserved_mem_region iterator.
As part of initializing struct pages in 2MiB chunks, we noticed that at
the end of free_all_bootmem() there was nothing that forced the
reserved/allocated 4KiB pages to be initialized.

This patch introduces for_each_reserved_mem_region(), which a later
patch in this series will use for that expansion.
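For illustration, the expected call pattern looks roughly like the
sketch below.  This caller is not part of this patch, and
init_reserved_pages() is a placeholder name, not an existing kernel
function:

	u64 i;
	phys_addr_t start, end;

	/*
	 * Walk every reserved memblock region and initialize the
	 * struct pages backing it.  The iterator reports an inclusive
	 * end, so PFN_UP(end) is the first pfn past the region.
	 * init_reserved_pages() is hypothetical.
	 */
	for_each_reserved_mem_region(i, &start, &end)
		init_reserved_pages(PFN_DOWN(start), PFN_UP(end));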
Signed-off-by: Robin Holt <holt@....com>
Signed-off-by: Nate Zimmer <nzimmer@....com>
To: "H. Peter Anvin" <hpa@...or.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>
Cc: Linux MM <linux-mm@...ck.org>
Cc: Rob Landley <rob@...dley.net>
Cc: Mike Travis <travis@....com>
Cc: Daniel J Blueman <daniel@...ascale-asia.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Greg KH <gregkh@...uxfoundation.org>
Cc: Yinghai Lu <yinghai@...nel.org>
Cc: Mel Gorman <mgorman@...e.de>
---
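For reference, a minimal standalone use of the new iterator
(illustrative only, not part of the diff below): the loop runs until
__next_reserved_mem_region() sets the cursor to ULLONG_MAX.

	u64 i;
	phys_addr_t start, end;

	/* print the [start, end] bounds (end inclusive) of every
	 * reserved memblock region */
	for_each_reserved_mem_region(i, &start, &end)
		pr_info("reserved: [%pa-%pa]\n", &start, &end);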
include/linux/memblock.h | 18 ++++++++++++++++++
mm/memblock.c | 32 ++++++++++++++++++++++++++++++++
2 files changed, 50 insertions(+)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index f388203..e99bbd1 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -118,6 +118,24 @@ void __next_free_mem_range_rev(u64 *idx, int nid, phys_addr_t *out_start,
i != (u64)ULLONG_MAX; \
__next_free_mem_range_rev(&i, nid, p_start, p_end, p_nid))
+void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start,
+ phys_addr_t *out_end);
+
+/**
+ * for_each_reserved_mem_region - iterate over all reserved memblock areas
+ * @i: u64 used as loop variable
+ * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
+ * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
+ *
+ * Walks over reserved areas of memblock. Available as soon as memblock
+ * is initialized.
+ */
+#define for_each_reserved_mem_region(i, p_start, p_end) \
+ for (i = 0UL, \
+ __next_reserved_mem_region(&i, p_start, p_end); \
+ i != (u64)ULLONG_MAX; \
+ __next_reserved_mem_region(&i, p_start, p_end))
+
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
int memblock_set_node(phys_addr_t base, phys_addr_t size, int nid);
diff --git a/mm/memblock.c b/mm/memblock.c
index c5fad93..0d7d6e7 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -564,6 +564,38 @@ int __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
}
/**
+ * __next_reserved_mem_region - next function for for_each_reserved_mem_region()
+ * @idx: pointer to u64 loop variable
+ * @out_start: ptr to phys_addr_t for start address of the region, can be %NULL
+ * @out_end: ptr to phys_addr_t for end address of the region, can be %NULL
+ *
+ * Iterate over all reserved memory regions.
+ */
+void __init_memblock __next_reserved_mem_region(u64 *idx,
+ phys_addr_t *out_start,
+ phys_addr_t *out_end)
+{
+ struct memblock_type *rsv = &memblock.reserved;
+
+ if (*idx < rsv->cnt) {
+ struct memblock_region *r = &rsv->regions[*idx];
+ phys_addr_t base = r->base;
+ phys_addr_t size = r->size;
+
+ if (out_start)
+ *out_start = base;
+ if (out_end)
+ *out_end = base + size - 1;
+
+ *idx += 1;
+ return;
+ }
+
+ /* signal end of iteration */
+ *idx = ULLONG_MAX;
+}
+
+/**
* __next_free_mem_range - next function for for_each_free_mem_range()
* @idx: pointer to u64 loop variable
* @nid: node selector, %MAX_NUMNODES for all nodes
--
1.8.2.1