Message-Id: <1457525450-4262-3-git-send-email-khandual@linux.vnet.ibm.com>
Date: Wed, 9 Mar 2016 17:40:44 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org
Cc: hughd@...gle.com, kirill@...temov.name, n-horiguchi@...jp.nec.com,
akpm@...ux-foundation.org, mgorman@...hsingularity.net,
aneesh.kumar@...ux.vnet.ibm.com, mpe@...erman.id.au
Subject: [RFC 3/9] mm/gup: Make follow_page_mask function PGD implementation aware
Currently the function follow_page_mask() does not handle huge pages
mapped directly at the PGD level. Teach it about the PGD-based huge
page implementation so the page table walk is complete.
Signed-off-by: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
---
mm/gup.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/mm/gup.c b/mm/gup.c
index 7bf19ff..53a2013 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -232,6 +232,12 @@ struct page *follow_page_mask(struct vm_area_struct *vma,
pgd = pgd_offset(mm, address);
if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
return no_page_table(vma, flags);
+ if (pgd_huge(*pgd) && vma->vm_flags & VM_HUGETLB) {
+ page = follow_huge_pgd(mm, address, pgd, flags);
+ if (page)
+ return page;
+ return no_page_table(vma, flags);
+ }
pud = pud_offset(pgd, address);
if (pud_none(*pud))
--
2.1.0