Message-ID: <202cde0e0909132218k70c31a5u922636914e603ad4@mail.gmail.com>
Date: Mon, 14 Sep 2009 17:18:53 +1200
From: Alexey Korolev <akorolex@...il.com>
To: Mel Gorman <mel@....ul.ie>, Eric Munson <linux-mm@...bm.net>,
Alexey Korolev <akorolev@...radead.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH 2/3] Helper which returns the huge page at a given address
(Take 3)
This patch provides a helper function which returns the huge page at a
given address, populating it before the page has been faulted in.
hugetlb_get_user_page() can be called from a file's mmap handler to
obtain pages before they have been requested by user level.
Signed-off-by: Alexey Korolev <akorolev@...radead.org>
---
 include/linux/hugetlb.h |    3 +++
 mm/hugetlb.c            |   23 +++++++++++++++++++++++
 2 files changed, 26 insertions(+)
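
For illustration only (not part of this patch): a hypothetical mmap
handler for a hugetlb-backed file could pre-populate its VMA with the
new helper roughly as sketched below; example_hugetlb_mmap() and the
surrounding driver are made up for this sketch.

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/hugetlb.h>

/*
 * Sketch only: assumes the vma is already known to be backed by huge
 * pages (e.g. a hugetlbfs file), so hstate_vma() is valid for it.
 */
static int example_hugetlb_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct hstate *h = hstate_vma(vma);
	unsigned long addr;
	struct page *page;

	/* Walk the mapping one huge page at a time and fault each in. */
	for (addr = vma->vm_start; addr < vma->vm_end;
	     addr += huge_page_size(h)) {
		page = hugetlb_get_user_page(vma, addr);
		if (IS_ERR(page))
			return PTR_ERR(page);
		/*
		 * The huge page is now instantiated; a driver could
		 * record page_to_pfn(page) here, for example.
		 */
	}
	return 0;
}

Errors propagate as ERR_PTR() values, which also matches the
!CONFIG_HUGETLB_PAGE stub added below.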
diff -aurp clean/include/linux/hugetlb.h patched/include/linux/hugetlb.h
--- clean/include/linux/hugetlb.h 2009-09-11 15:33:48.000000000 +1200
+++ patched/include/linux/hugetlb.h 2009-09-11 20:09:02.000000000 +1200
@@ -39,6 +39,8 @@ int hugetlb_reserve_pages(struct inode *
struct vm_area_struct *vma,
int acctflags);
void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed);
+struct page *hugetlb_get_user_page(struct vm_area_struct *vma,
+ unsigned long address);
extern unsigned long hugepages_treat_as_movable;
extern const unsigned long hugetlb_zero, hugetlb_infinity;
@@ -100,6 +102,7 @@ static inline void hugetlb_report_meminf
#define is_hugepage_only_range(mm, addr, len) 0
#define hugetlb_free_pgd_range(tlb, addr, end, floor, ceiling) ({BUG(); 0; })
#define hugetlb_fault(mm, vma, addr, flags) ({ BUG(); 0; })
+#define hugetlb_get_user_page(vma, address) ERR_PTR(-EINVAL)
#define hugetlb_change_protection(vma, address, end, newprot)
diff -aurp clean/mm/hugetlb.c patched/mm/hugetlb.c
--- clean/mm/hugetlb.c 2009-09-06 11:38:12.000000000 +1200
+++ patched/mm/hugetlb.c 2009-09-11 08:34:00.000000000 +1200
@@ -2187,6 +2187,29 @@ static int huge_zeropage_ok(pte_t *ptep,
return huge_pte_none(huge_ptep_get(ptep));
}
+/*
+ * hugetlb_get_user_page returns the page at a given address for population
+ * before the page has been faulted.
+ */
+struct page *hugetlb_get_user_page(struct vm_area_struct *vma,
+ unsigned long address)
+{
+ int ret;
+ int cnt = 1;
+ struct page *pg;
+ struct hstate *h = hstate_vma(vma);
+
+ address = address & huge_page_mask(h);
+ ret = follow_hugetlb_page(vma->vm_mm, vma, &pg,
+ NULL, &address, &cnt, 0, 0);
+ if (ret < 0)
+ return ERR_PTR(ret);
+ put_page(pg);
+
+ return pg;
+}
+EXPORT_SYMBOL_GPL(hugetlb_get_user_page);
+
int follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
struct page **pages, struct vm_area_struct **vmas,
unsigned long *position, int *length, int i,
--