Message-ID: <aAEteJh4Gb8R7gPm@debian>
Date: Thu, 17 Apr 2025 09:34:00 -0700
From: Fan Ni <nifan.cxl@...il.com>
To: Sidhartha Kumar <sidhartha.kumar@...cle.com>
Cc: nifan.cxl@...il.com, muchun.song@...ux.dev, willy@...radead.org,
	mcgrof@...nel.org, a.manzanares@...sung.com, dave@...olabs.net,
	akpm@...ux-foundation.org, david@...hat.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() to
 take folio instead of page

On Thu, Apr 17, 2025 at 12:21:55PM -0400, Sidhartha Kumar wrote:
> On 4/17/25 11:43 AM, nifan.cxl@...il.com wrote:
> > From: Fan Ni <fan.ni@...sung.com>
> > 
> > The function __unmap_hugepage_range() has two kinds of users:
> > 1) unmap_hugepage_range(), which passes in the head page of a folio.
> >     Since unmap_hugepage_range() already takes a folio and there are no other
> >     uses of the folio struct in the function, it is natural for
> >     __unmap_hugepage_range() to take a folio as well.
> > 2) All other uses, which pass in NULL pointer.
> > 
> > In both cases, we can pass in a folio. Refactor __unmap_hugepage_range()
> > to take a folio.
> > 
> > Signed-off-by: Fan Ni <fan.ni@...sung.com>
> > ---
> > 
> > Question: If the change in the patch makes sense, should we try to convert all
> > "page" uses in __unmap_hugepage_range() to folio?
> > 
> 
> For this to be correct, we have to ensure that the page returned by:
> 
> 	page = pte_page(pte);
> 
> is always the head page of a folio. pte comes from:
> 
> 	pte = huge_ptep_get(mm, address, ptep);
> 
> and in the for loop above:
> 		
> 	for (; address < end; address += sz)
> 
> address is incremented by the huge page size, so I think address here only
> points to the head pages of hugetlb folios, and it would make sense to
> convert page to folio here.
> 

Thanks, Sidhartha, for reviewing the series. I have a similar understanding
and wanted to get confirmation from experts in this area.

Thanks.
Fan

> > ---
> >   include/linux/hugetlb.h |  2 +-
> >   mm/hugetlb.c            | 10 +++++-----
> >   2 files changed, 6 insertions(+), 6 deletions(-)
> > 
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index b7699f35c87f..d6c503dd2f7d 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -133,7 +133,7 @@ void unmap_hugepage_range(struct vm_area_struct *,
> >   void __unmap_hugepage_range(struct mmu_gather *tlb,
> >   			  struct vm_area_struct *vma,
> >   			  unsigned long start, unsigned long end,
> > -			  struct page *ref_page, zap_flags_t zap_flags);
> > +			  struct folio *ref_folio, zap_flags_t zap_flags);
> >   void hugetlb_report_meminfo(struct seq_file *);
> >   int hugetlb_report_node_meminfo(char *buf, int len, int nid);
> >   void hugetlb_show_meminfo_node(int nid);
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 3181dbe0c4bb..7d280ab23784 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5833,7 +5833,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> >   void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >   			    unsigned long start, unsigned long end,
> > -			    struct page *ref_page, zap_flags_t zap_flags)
> > +			    struct folio *ref_folio, zap_flags_t zap_flags)
> >   {
> >   	struct mm_struct *mm = vma->vm_mm;
> >   	unsigned long address;
> > @@ -5910,8 +5910,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >   		 * page is being unmapped, not a range. Ensure the page we
> >   		 * are about to unmap is the actual page of interest.
> >   		 */
> > -		if (ref_page) {
> > -			if (page != ref_page) {
> > +		if (ref_folio) {
> > +			if (page != folio_page(ref_folio, 0)) {
> >   				spin_unlock(ptl);
> >   				continue;
> >   			}
> > @@ -5977,7 +5977,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >   		/*
> >   		 * Bail out after unmapping reference page if supplied
> >   		 */
> > -		if (ref_page)
> > +		if (ref_folio)
> >   			break;
> >   	}
> >   	tlb_end_vma(tlb, vma);
> > @@ -6052,7 +6052,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> >   	tlb_gather_mmu(&tlb, vma->vm_mm);
> >   	__unmap_hugepage_range(&tlb, vma, start, end,
> > -			       folio_page(ref_folio, 0), zap_flags);
> > +			       ref_folio, zap_flags);
> >   	mmu_notifier_invalidate_range_end(&range);
> >   	tlb_finish_mmu(&tlb);
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@...cle.com>
