Message-ID: <bd6e4a54-369b-4b79-bc4b-10a583a1c3de@redhat.com>
Date: Tue, 22 Apr 2025 11:00:50 +0200
From: David Hildenbrand <david@...hat.com>
To: nifan.cxl@...il.com, muchun.song@...ux.dev, willy@...radead.org
Cc: mcgrof@...nel.org, a.manzanares@...sung.com, dave@...olabs.net,
 akpm@...ux-foundation.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 Fan Ni <fan.ni@...sung.com>
Subject: Re: [PATCH v2 4/4] mm/hugetlb: Convert use of struct page to folio in
 __unmap_hugepage_range()

On 18.04.25 18:57, nifan.cxl@...il.com wrote:
> From: Fan Ni <fan.ni@...sung.com>
> 
> In __unmap_hugepage_range(), the "page" pointer always points to the
> first page of a huge page, which guarantees there is a folio associated
> with it.  Convert the "page" pointer to use a folio.
> 
> Signed-off-by: Fan Ni <fan.ni@...sung.com>
> ---
> This is a new patch added to the series based on the discussion here:
> https://lore.kernel.org/linux-mm/aAHUluy7T32ZlYg7@debian/T/#m2b9cc1743e1907e52658815b297b9d249474f387
> ---
>   mm/hugetlb.c | 18 +++++++++---------
>   1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 7d280ab23784..8177a3fe47d7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5840,7 +5840,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   	pte_t *ptep;
>   	pte_t pte;
>   	spinlock_t *ptl;
> -	struct page *page;
> +	struct folio *folio;
>   	struct hstate *h = hstate_vma(vma);
>   	unsigned long sz = huge_page_size(h);
>   	bool adjust_reservation = false;
> @@ -5904,14 +5904,14 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   			continue;
>   		}
>   
> -		page = pte_page(pte);
> +		folio = page_folio(pte_page(pte));
>   		/*
>   		 * If a reference page is supplied, it is because a specific
>   		 * page is being unmapped, not a range. Ensure the page we
>   		 * are about to unmap is the actual page of interest.
>   		 */
>   		if (ref_folio) {
> -			if (page != folio_page(ref_folio, 0)) {
> +			if (folio != ref_folio) {
>   				spin_unlock(ptl);
>   				continue;
>   			}

What about something like the following (keeping in mind that I suggest 
renaming "ref_folio" -> "folio" in the previous patches)?


const bool folio_provided = !!folio;

...

if (folio_provided) {
	if (folio != page_folio(pte_page(pte))) {
		spin_unlock(ptl);
		continue;
	}
	...
} else {
	folio = page_folio(pte_page(pte));
}

...

if (folio_provided)
	break;
...
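
Putting it together, the relevant part of __unmap_hugepage_range() would 
then read roughly like this (untested sketch, unrelated parts of the pte 
walk elided):

	const bool folio_provided = !!folio;

	...

	/*
	 * If a folio is supplied, it is because a specific folio is
	 * being unmapped, not a range. Make sure the folio we are
	 * about to unmap is the actual folio of interest.
	 */
	if (folio_provided) {
		if (folio != page_folio(pte_page(pte))) {
			spin_unlock(ptl);
			continue;
		}
		...
	} else {
		folio = page_folio(pte_page(pte));
	}

	...

	spin_unlock(ptl);

	/* Bail out after unmapping the requested folio, if one was given. */
	if (folio_provided)
		break;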

-- 
Cheers,

David / dhildenb

