Message-Id: <20160331160650.cfc0fa57e97a45e94bc023f4@linux-foundation.org>
Date: Thu, 31 Mar 2016 16:06:50 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Steve Capper <steve.capper@....com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
will.deacon@....com, dwoods@...lanox.com, mhocko@...e.com,
mingo@...nel.org,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH] mm: Exclude HugeTLB pages from THP page_mapped logic
On Tue, 29 Mar 2016 17:39:41 +0100 Steve Capper <steve.capper@....com> wrote:
> HugeTLB pages cannot be split, thus use the compound_mapcount to
> track rmaps.
>
> Currently the page_mapped function will check the compound_mapcount, but
s/the page_mapped function/page_mapped()/. It's so much simpler!
> will also go through the constituent pages of a THP compound page and
> query the individual _mapcount's too.
>
> Unfortunately, the page_mapped function does not distinguish between
> HugeTLB and THP compound pages, and assumes that a compound page always
> needs to have HPAGE_PMD_NR constituent pages queried.
>
> For most cases when dealing with HugeTLB this is just inefficient, but
> for scenarios where the HugeTLB page size is smaller than the pmd block
> size (e.g. when using the contiguous bit on ARM) this can lead to crashes.
>
> This patch adjusts the page_mapped function such that we skip the
> unnecessary THP reference checks for HugeTLB pages.
>
> Fixes: e1534ae95004 ("mm: differentiate page_mapped() from page_mapcount() for compound pages")
> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Signed-off-by: Steve Capper <steve.capper@....com>
> ---
>
> Hi,
>
> This patch is my approach to fixing a problem that was unearthed with
> HugeTLB pages on arm64. We ran with PAGE_SIZE=64KB and placed down 32
> contiguous ptes to create 2MB HugeTLB pages. (We can provide hints to
> the MMU that page table entries are contiguous, so larger TLB entries
> can be used to represent them.)
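>
> To put numbers on that (a sketch on my part, spelling out the arm64
> 64KB-granule constants rather than quoting the real headers):
>
>	#define PAGE_SHIFT	16	/* 64KB base pages */
>	#define PMD_SHIFT	29	/* one pmd maps 512MB at 64KB granule */
>	#define HPAGE_PMD_NR	(1UL << (PMD_SHIFT - PAGE_SHIFT))	/* 8192 */
>	#define CONT_PTES	32	/* 32 * 64KB = one 2MB HugeTLB page */
>
> So a 2MB HugeTLB page is a compound page of just 32 constituent pages,
> while the old loop walks HPAGE_PMD_NR == 8192 struct pages, running far
> past the end of the compound page and reading unrelated memory.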
So which kernel version(s) need this patch? I think both 4.4 and 4.5
will crash in this manner? Should we backport the fix into 4.4.x and
4.5.x?
>
> ...
>
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1031,6 +1031,8 @@ static inline bool page_mapped(struct page *page)
>  	page = compound_head(page);
>  	if (atomic_read(compound_mapcount_ptr(page)) >= 0)
>  		return true;
> +	if (PageHuge(page))
> +		return false;
>  	for (i = 0; i < hpage_nr_pages(page); i++) {
>  		if (atomic_read(&page[i]._mapcount) >= 0)
>  			return true;
page_mapped() is moronically huge. Uninlining it saves 206 bytes per
callsite. It has 40+ callsites.
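(Back-of-the-envelope: 206 bytes at 40+ callsites is ~8KB of inline
expansion. If anyone wants to reproduce that kind of number, comparing
before/after builds with

	./scripts/bloat-o-meter vmlinux.old vmlinux.new

should show the per-symbol and total deltas - that's my assumed
methodology, not necessarily how the figure above was produced.)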
btw, is anyone else seeing this `make M=' breakage?
akpm3:/usr/src/25> make M=mm
Makefile:679: Cannot use CONFIG_KCOV: -fsanitize-coverage=trace-pc is not supported by compiler
WARNING: Symbol version dump ./Module.symvers
is missing; modules will have no dependencies and modversions.
make[1]: *** No rule to make target `mm/filemap.o', needed by `mm/built-in.o'. Stop.
make: *** [_module_mm] Error 2
It's a post-4.5 thing.
From: Andrew Morton <akpm@...ux-foundation.org>
Subject: mm: uninline page_mapped()
It's huge. Uninlining it saves 206 bytes per callsite. Shaves 4924 bytes
from the x86_64 allmodconfig vmlinux.
Cc: Steve Capper <steve.capper@....com>
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---
 include/linux/mm.h |   21 +--------------------
 mm/util.c          |   22 ++++++++++++++++++++++
 2 files changed, 23 insertions(+), 20 deletions(-)
diff -puN include/linux/mm.h~mm-uninline-page_mapped include/linux/mm.h
--- a/include/linux/mm.h~mm-uninline-page_mapped
+++ a/include/linux/mm.h
@@ -1019,26 +1019,7 @@ static inline pgoff_t page_file_index(st
 	return page->index;
 }
 
-/*
- * Return true if this page is mapped into pagetables.
- * For compound page it returns true if any subpage of compound page is mapped.
- */
-static inline bool page_mapped(struct page *page)
-{
-	int i;
-	if (likely(!PageCompound(page)))
-		return atomic_read(&page->_mapcount) >= 0;
-	page = compound_head(page);
-	if (atomic_read(compound_mapcount_ptr(page)) >= 0)
-		return true;
-	if (PageHuge(page))
-		return false;
-	for (i = 0; i < hpage_nr_pages(page); i++) {
-		if (atomic_read(&page[i]._mapcount) >= 0)
-			return true;
-	}
-	return false;
-}
+bool page_mapped(struct page *page);
 
 /*
  * Return true only if the page has been allocated with
diff -puN mm/util.c~mm-uninline-page_mapped mm/util.c
--- a/mm/util.c~mm-uninline-page_mapped
+++ a/mm/util.c
@@ -346,6 +346,28 @@ void *page_rmapping(struct page *page)
 	return __page_rmapping(page);
 }
 
+/*
+ * Return true if this page is mapped into pagetables.
+ * For compound page it returns true if any subpage of compound page is mapped.
+ */
+bool page_mapped(struct page *page)
+{
+	int i;
+	if (likely(!PageCompound(page)))
+		return atomic_read(&page->_mapcount) >= 0;
+	page = compound_head(page);
+	if (atomic_read(compound_mapcount_ptr(page)) >= 0)
+		return true;
+	if (PageHuge(page))
+		return false;
+	for (i = 0; i < hpage_nr_pages(page); i++) {
+		if (atomic_read(&page[i]._mapcount) >= 0)
+			return true;
+	}
+	return false;
+}
+EXPORT_SYMBOL(page_mapped);
+
 struct anon_vma *page_anon_vma(struct page *page)
 {
 	unsigned long mapping;
_
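Purely as an illustration of why the EXPORT_SYMBOL() above is needed
once page_mapped() goes out of line: modular callers now link against
the symbol instead of inlining the body. A throwaway test module
(hypothetical, not part of the patch) would look like:

	#include <linux/module.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>

	static int __init pm_demo_init(void)
	{
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			return -ENOMEM;
		/* Freshly allocated, so no ptes map it yet. */
		pr_info("page_mapped() == %d\n", page_mapped(page));
		__free_page(page);
		return 0;
	}

	static void __exit pm_demo_exit(void)
	{
	}

	module_init(pm_demo_init);
	module_exit(pm_demo_exit);
	MODULE_LICENSE("GPL");

Without the export, loading such a module fails with an "Unknown symbol
page_mapped" error.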