Message-Id: <20221221180848.20774-2-vishal.moola@gmail.com>
Date: Wed, 21 Dec 2022 10:08:45 -0800
From: "Vishal Moola (Oracle)" <vishal.moola@...il.com>
To: linux-mm@...ck.org
Cc: damon@...ts.linux.dev, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, sj@...nel.org, willy@...radead.org,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: [PATCH v4 1/4] mm/memory: Add vm_normal_folio()

Introduce a wrapper function called vm_normal_folio(). This function
calls vm_normal_page() and returns the folio of the page found, or NULL
if no page is found.

This function allows callers to get a folio from a pte, which will
eventually allow them to completely replace their struct page variables
with struct folio.
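
For illustration only (this snippet is hypothetical and not taken from
any caller converted later in the series), a caller that currently looks
up the page and converts it to a folio by hand:

	struct page *page = vm_normal_page(vma, addr, pte);
	struct folio *folio;

	if (!page)
		return;
	folio = page_folio(page);

can then be written in terms of the new helper:

	struct folio *folio = vm_normal_folio(vma, addr, pte);

	if (!folio)
		return;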
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@...il.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@...radead.org>
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 10 ++++++++++
 2 files changed, 12 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ff46dcab2004..d29bfae4b71f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1968,6 +1968,8 @@ static inline bool can_do_mlock(void) { return false; }
 extern int user_shm_lock(size_t, struct ucounts *);
 extern void user_shm_unlock(size_t, struct ucounts *);
+struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
+			     pte_t pte);
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
			     pte_t pte);
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/memory.c b/mm/memory.c
index 37d1763c4d47..4000e9f017e0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -625,6 +625,16 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 	return pfn_to_page(pfn);
 }
+struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
+			    pte_t pte)
+{
+	struct page *page = vm_normal_page(vma, addr, pte);
+
+	if (page)
+		return page_folio(page);
+	return NULL;
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
				pmd_t pmd)
--
2.38.1