Message-ID: <YoSEsa2zvqylYuZC@FVFYT0MHHV2J.usts.net>
Date: Wed, 18 May 2022 13:31:29 +0800
From: Muchun Song <songmuchun@...edance.com>
To: Yang Shi <shy828301@...il.com>
Cc: willy@...radead.org, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH] mm: pvmw: check possible huge PMD map by
transhuge_vma_suitable()
On Fri, May 13, 2022 at 12:17:05PM -0700, Yang Shi wrote:
> IIUC PVMW checks whether the vma is possibly huge PMD mapped via
> transparent_hugepage_active() and "pvmw->nr_pages >= HPAGE_PMD_NR".
>
> Actually pvmw->nr_pages is returned by compound_nr() or
> folio_nr_pages(), so the page must be a THP as long as "pvmw->nr_pages
> >= HPAGE_PMD_NR", and it is guaranteed that a THP was allocated for a
> valid VMA in the first place. But the page may not be PMD mapped if
> the VMA is a file VMA that is not properly aligned. transhuge_vma_suitable()
> is the helper that performs exactly this check, so use it instead of
> transparent_hugepage_active(), which is too heavyweight and overkill
> here.
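
For reference, the file-VMA alignment check referred to above boils down
to the VMA's starting page frame and its file page offset being congruent
modulo the number of pages per PMD. A minimal standalone sketch of that
arithmetic, with illustrative names and an assumed 512-page PMD rather
than the kernel's actual helpers:

/*
 * Standalone sketch, not kernel code: a file-backed mapping can only be
 * PMD-mapped if the distance (in pages) between its virtual start and
 * its file offset is a multiple of the pages-per-PMD, so that file
 * offsets and virtual addresses share the same huge-page boundaries.
 */
#include <stdbool.h>

#define PAGES_PER_PMD 512UL     /* assumed: 2MB PMD / 4KB base pages */

static bool file_vma_pmd_alignable(unsigned long vm_start_pfn,
                                   unsigned long vm_pgoff)
{
        /* Same test IS_ALIGNED() performs for a power-of-two divisor. */
        return ((vm_start_pfn - vm_pgoff) % PAGES_PER_PMD) == 0;
}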
>
> Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
> Cc: Muchun Song <songmuchun@...edance.com>
> Signed-off-by: Yang Shi <shy828301@...il.com>
> ---
> v2: * Fixed build error for !CONFIG_TRANSPARENT_HUGEPAGE
>     * Removed fixes tag per Willy
>
>  include/linux/huge_mm.h | 8 ++++++--
>  mm/page_vma_mapped.c    | 2 +-
>  2 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index fbf36bb1be22..c2826b1f4069 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -117,8 +117,10 @@ extern struct kobj_attribute shmem_enabled_attr;
>  extern unsigned long transparent_hugepage_flags;
>  
>  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> -                unsigned long haddr)
> +                unsigned long addr)
>  {
> +        unsigned long haddr;
> +
>          /* Don't have to check pgoff for anonymous vma */
>          if (!vma_is_anonymous(vma)) {
>                  if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
> @@ -126,6 +128,8 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
>                          return false;
>          }
>  
> +        haddr = addr & HPAGE_PMD_MASK;
> +
>          if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
>                  return false;
>          return true;
> @@ -328,7 +332,7 @@ static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
>  }
>  
>  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> -                unsigned long haddr)
> +                unsigned long addr)
>  {
>          return false;
>  }
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index c10f839fc410..e971a467fcdf 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -243,7 +243,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>           * cleared *pmd but not decremented compound_mapcount().
>           */
>          if ((pvmw->flags & PVMW_SYNC) &&
> -            transparent_hugepage_active(vma) &&
> +            transhuge_vma_suitable(vma, pvmw->address) &&
How about the following diff? Then we do not need to change
transhuge_vma_suitable(), since all of its users already do the
alignment themselves.
Thanks.
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index c10f839fc410..0aed5ca60c67 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -243,7 +243,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
          * cleared *pmd but not decremented compound_mapcount().
          */
         if ((pvmw->flags & PVMW_SYNC) &&
-            transparent_hugepage_active(vma) &&
+            IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+            transhuge_vma_suitable(vma, pvmw->address & HPAGE_PMD_MASK) &&
             (pvmw->nr_pages >= HPAGE_PMD_NR)) {
                 spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
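
A note on the IS_ENABLED() guard above: it expands to a compile-time 0
or 1 for the given Kconfig symbol, so when CONFIG_TRANSPARENT_HUGEPAGE
is off the condition constant-folds to false and the compiler eliminates
the dead branch, which is why no #ifdef is needed at this call site. A
minimal sketch of the pattern, with stand-in names rather than the
kernel's kconfig.h machinery:

/*
 * Sketch of the constant-guard pattern; FEATURE_ENABLED stands in for
 * the kernel's IS_ENABLED(CONFIG_...), which resolves to 0 or 1.
 */
#define FEATURE_ENABLED 0       /* pretend the feature is compiled out */

static int do_feature_work(void)
{
        return 1;               /* dead code when FEATURE_ENABLED is 0 */
}

int maybe_do_feature(void)
{
        /*
         * The guard is a compile-time constant, so the compiler folds
         * the condition and removes the call without an #ifdef.
         */
        if (FEATURE_ENABLED && do_feature_work())
                return 1;
        return 0;
}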
>              (pvmw->nr_pages >= HPAGE_PMD_NR)) {
>                  spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
>
> --
> 2.26.3
>
>