Message-Id: <20220513191705.457775-1-shy828301@gmail.com>
Date: Fri, 13 May 2022 12:17:05 -0700
From: Yang Shi <shy828301@...il.com>
To: willy@...radead.org, songmuchun@...edance.com,
akpm@...ux-foundation.org
Cc: shy828301@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [v2 PATCH] mm: pvmw: check possible huge PMD map by transhuge_vma_suitable()
IIUC PVMW checks whether the vma is possibly huge PMD mapped by calling
transparent_hugepage_active() and testing "pvmw->nr_pages >=
HPAGE_PMD_NR".  But pvmw->nr_pages comes from compound_nr() or
folio_nr_pages(), so the page must be a THP whenever "pvmw->nr_pages
>= HPAGE_PMD_NR", and a THP is only allocated for a valid VMA in the
first place.  The page may still not be PMD mapped, though, if the VMA
is a file VMA that is not properly aligned.  transhuge_vma_suitable()
does exactly that check, so use it instead of
transparent_hugepage_active(), which is too heavy and overkill here.
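
For reference, here is a self-contained sketch (illustrative only, not
kernel code) of the alignment and range check transhuge_vma_suitable()
makes after this patch.  The PAGE_SHIFT/HPAGE_PMD_* constants and the
simplified vma struct are assumed stand-ins for the kernel definitions
(4K pages, 2M huge PMDs):

/*
 * Illustrative sketch of the transhuge_vma_suitable() check.
 * Constants below are assumptions (x86-64-style 4K pages, 2M PMD).
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_PMD_SHIFT	21
#define HPAGE_PMD_SIZE	(1UL << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
#define HPAGE_PMD_NR	(1UL << (HPAGE_PMD_SHIFT - PAGE_SHIFT))
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

struct vma {			/* simplified vm_area_struct */
	unsigned long vm_start, vm_end, vm_pgoff;
	bool anonymous;
};

static bool suitable(const struct vma *vma, unsigned long addr)
{
	unsigned long haddr;

	/*
	 * A file VMA can only be PMD mapped if its file offset and
	 * virtual address are huge-page aligned relative to each other.
	 */
	if (!vma->anonymous &&
	    !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
			HPAGE_PMD_NR))
		return false;

	/* The whole PMD-sized range around addr must lie inside the VMA. */
	haddr = addr & HPAGE_PMD_MASK;
	return haddr >= vma->vm_start && haddr + HPAGE_PMD_SIZE <= vma->vm_end;
}

int main(void)
{
	/* file VMA with a misaligned vm_pgoff: cannot be PMD mapped */
	struct vma file_vma = { 0x200000, 0x800000, 1, false };
	/* anonymous VMA covering a full aligned 2M range: suitable */
	struct vma anon_vma = { 0x200000, 0x800000, 0, true };

	printf("%d %d\n", suitable(&file_vma, 0x300000),
	       suitable(&anon_vma, 0x300000));	/* prints "0 1" */
	return 0;
}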
Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
Cc: Muchun Song <songmuchun@...edance.com>
Signed-off-by: Yang Shi <shy828301@...il.com>
---
v2: * Fixed build error for !CONFIG_TRANSPARENT_HUGEPAGE
* Removed fixes tag per Willy
include/linux/huge_mm.h | 8 ++++++--
mm/page_vma_mapped.c | 2 +-
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index fbf36bb1be22..c2826b1f4069 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -117,8 +117,10 @@ extern struct kobj_attribute shmem_enabled_attr;
extern unsigned long transparent_hugepage_flags;
static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
- unsigned long haddr)
+ unsigned long addr)
{
+ unsigned long haddr;
+
/* Don't have to check pgoff for anonymous vma */
if (!vma_is_anonymous(vma)) {
if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
@@ -126,6 +128,8 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
return false;
}
+ haddr = addr & HPAGE_PMD_MASK;
+
if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
return false;
return true;
@@ -328,7 +332,7 @@ static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
}
static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
- unsigned long haddr)
+ unsigned long addr)
{
return false;
}
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index c10f839fc410..e971a467fcdf 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -243,7 +243,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
* cleared *pmd but not decremented compound_mapcount().
*/
if ((pvmw->flags & PVMW_SYNC) &&
- transparent_hugepage_active(vma) &&
+ transhuge_vma_suitable(vma, pvmw->address) &&
(pvmw->nr_pages >= HPAGE_PMD_NR)) {
spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
--
2.26.3