Message-ID: <47EABF75.9090605@linux.vnet.ibm.com>
Date: Wed, 26 Mar 2008 16:26:13 -0500
From: Jon Tollefson <kniht@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org,
Linux Memory Management List <linux-mm@...ck.org>,
linuxppc-dev <linuxppc-dev@...abs.org>
CC: Adam Litke <agl@...ux.vnet.ibm.com>,
Andi Kleen <andi@...stfloor.org>,
Paul Mackerras <paulus@...ba.org>
Subject: [PATCH 2/4] powerpc: function for allocating gigantic pages

The 16G page locations have been saved in an array during early boot.  The
alloc_bm_huge_page() function adds a page from this array to the
huge_boot_pages list.

Signed-off-by: Jon Tollefson <kniht@...ux.vnet.ibm.com>
---
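
[Reviewer note, not for the changelog: the sketch below is a small,
self-contained userspace model of the handoff described above, included
only to make the flow easier to follow.  An early-boot step records
gigantic page addresses in a fixed-size array, and alloc_bm_huge_page()
later pops one entry at a time onto the boot list.  The helpers
record_gpage() and take_gpage() are illustrative names only; they are not
kernel functions.]

/* Userspace model of the 16G page handoff (illustrative only). */
#include <stdio.h>
#include <stdlib.h>

#define MAX_NUMBER_GPAGES	1024

/* Stand-in for the huge_bm_page entries on huge_boot_pages. */
struct boot_page {
	void *addr;
	struct boot_page *next;
};

static void *gpage_freearray[MAX_NUMBER_GPAGES];
static unsigned nr_gpages;
static struct boot_page *boot_pages;	/* models huge_boot_pages */

/* "Early boot": remember where a gigantic page lives. */
static int record_gpage(void *addr)
{
	if (nr_gpages >= MAX_NUMBER_GPAGES)
		return 0;
	gpage_freearray[nr_gpages++] = addr;
	return 1;
}

/* "Later": move one saved page onto the boot list, mirroring what
 * alloc_bm_huge_page() does with huge_boot_pages.
 */
static int take_gpage(void)
{
	struct boot_page *m;

	if (nr_gpages == 0)
		return 0;
	m = malloc(sizeof(*m));
	if (!m)
		return 0;
	m->addr = gpage_freearray[--nr_gpages];
	m->next = boot_pages;
	boot_pages = m;
	return 1;
}

int main(void)
{
	struct boot_page *m;

	/* Pretend early boot found two gigantic pages at these addresses. */
	record_gpage((void *)0x40000000UL);
	record_gpage((void *)0x80000000UL);

	/* Drain the array onto the boot list, one page per call. */
	while (take_gpage())
		;

	for (m = boot_pages; m; m = m->next)
		printf("boot page at %p\n", m->addr);

	return 0;
}
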
 hugetlbpage.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 94625db..31d977b 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -29,6 +29,10 @@
 
 #define NUM_LOW_AREAS		(0x100000000UL >> SID_SHIFT)
 #define NUM_HIGH_AREAS		(PGTABLE_RANGE >> HTLB_AREA_SHIFT)
+#define MAX_NUMBER_GPAGES	1024
+
+static void *gpage_freearray[MAX_NUMBER_GPAGES];
+static unsigned nr_gpages;
 
 unsigned int hugepte_shift;
 #define PTRS_PER_HUGEPTE	(1 << hugepte_shift)
@@ -104,6 +108,21 @@ pmd_t *hpmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long addr)
 }
 #endif
 
+/* Put 16G page address into temporary huge page list because the mem_map
+ * is not up yet.
+ */
+int alloc_bm_huge_page(struct hstate *h)
+{
+	struct huge_bm_page *m;
+	if (nr_gpages == 0)
+		return 0;
+	m = gpage_freearray[--nr_gpages];
+	list_add(&m->list, &huge_boot_pages);
+	m->hstate = h;
+	return 1;
+}
+
+
 /* Modelled after find_linux_pte() */
 pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
 {
--