Message-Id: <20230626171430.3167004-2-ryan.roberts@arm.com>
Date: Mon, 26 Jun 2023 18:14:21 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Yin Fengwei <fengwei.yin@...el.com>,
David Hildenbrand <david@...hat.com>,
Yu Zhao <yuzhao@...gle.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>
Cc: Ryan Roberts <ryan.roberts@....com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-alpha@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-ia64@...r.kernel.org,
linux-m68k@...ts.linux-m68k.org, linux-s390@...r.kernel.org
Subject: [PATCH v1 01/10] mm: Expose clear_huge_page() unconditionally

In preparation for extending vma_alloc_zeroed_movable_folio() to
allocate an arbitrary-order folio, expose clear_huge_page()
unconditionally, so that it can be used to zero the allocated folio in
the generic implementation of vma_alloc_zeroed_movable_folio().
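
For illustration only, here is a sketch (not part of this patch) of how
the generic vma_alloc_zeroed_movable_folio() might zero a higher-order
folio once clear_huge_page() is visible unconditionally. The extra
gfp/order parameters are assumptions about the follow-up change, not
the actual interface introduced later in this series:

/* Illustrative sketch; gfp/order parameters are assumed. */
static inline
struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
					     unsigned long vaddr,
					     gfp_t gfp, int order)
{
	struct folio *folio;

	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp, order,
				vma, vaddr, false);
	if (folio)
		clear_huge_page(&folio->page, vaddr, folio_nr_pages(folio));

	return folio;
}
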
Signed-off-by: Ryan Roberts <ryan.roberts@....com>
---
include/linux/mm.h | 3 ++-
mm/memory.c | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7f1741bd870a..7e3bf45e6491 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3684,10 +3684,11 @@ enum mf_action_page_type {
*/
extern const struct attribute_group memory_failure_attr_group;
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
extern void clear_huge_page(struct page *page,
unsigned long addr_hint,
unsigned int pages_per_huge_page);
+
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
int copy_user_large_folio(struct folio *dst, struct folio *src,
unsigned long addr_hint,
struct vm_area_struct *vma);
diff --git a/mm/memory.c b/mm/memory.c
index fb30f7523550..3d4ea668c4d1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5741,7 +5741,6 @@ void __might_fault(const char *file, int line)
EXPORT_SYMBOL(__might_fault);
#endif
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
/*
* Process all subpages of the specified huge page with the specified
* operation. The target subpage will be processed last to keep its
@@ -5839,6 +5838,7 @@ void clear_huge_page(struct page *page,
process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
}
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
unsigned long addr,
struct vm_area_struct *vma,
--
2.25.1