Message-ID: <d0c7b14a-6ce9-4e3a-8cd8-7cce4ee7d7cc@redhat.com>
Date: Mon, 14 Jul 2025 17:30:17 +0200
From: David Hildenbrand <david@...hat.com>
To: Zi Yan <ziy@...dia.com>, Balbir Singh <balbirs@...dia.com>,
 linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>, Hugh Dickins
 <hughd@...gle.com>, Kirill Shutemov <k.shutemov@...il.com>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>,
 "Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
 Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
 Barry Song <baohua@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] mm/huge_memory: move unrelated code out of
 __split_unmapped_folio()

On 11.07.25 20:23, Zi Yan wrote:
> remap(), folio_ref_unfreeze(), lru_add_split_folio() are not relevant to
> splitting unmapped folio operations. Move them out to the caller so that
> __split_unmapped_folio() only handles unmapped folio splits. This makes
> __split_unmapped_folio() reusable.
> 
> Convert VM_BUG_ON(mapping) to use VM_WARN_ON_ONCE_FOLIO().
> 
> Signed-off-by: Zi Yan <ziy@...dia.com>
> ---

[...]

> -	if (folio_test_swapcache(folio)) {
> -		VM_BUG_ON(mapping);
> -
> -		/* a swapcache folio can only be uniformly split to order-0 */
> -		if (!uniform_split || new_order != 0)
> -			return -EINVAL;
> -
> -		swap_cache = swap_address_space(folio->swap);
> -		xa_lock(&swap_cache->i_pages);
> -	}
> -
>   	if (folio_test_anon(folio))
>   		mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>   
> -	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> -	lruvec = folio_lruvec_lock(folio);
>   

Nit: this now leaves a double empty line.

>   	folio_clear_has_hwpoisoned(folio);
>   
> @@ -3480,9 +3451,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>   	for (split_order = start_order;
>   	     split_order >= new_order && !stop_split;
>   	     split_order--) {
> -		int old_order = folio_order(folio);
> -		struct folio *release;
>   		struct folio *end_folio = folio_next(folio);
> +		int old_order = folio_order(folio);
> +		struct folio *new_folio;
>   
>   		/* order-1 anonymous folio is not supported */
>   		if (folio_test_anon(folio) && split_order == 1)
> @@ -3517,113 +3488,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>   
>   after_split:
>   		/*
> -		 * Iterate through after-split folios and perform related
> -		 * operations. But in buddy allocator like split, the folio
> +		 * Iterate through after-split folios and update folio stats.
> +		 * But in buddy allocator like split, the folio
>   		 * containing the specified page is skipped until its order
>   		 * is new_order, since the folio will be worked on in next
>   		 * iteration.
>   		 */
> -		for (release = folio; release != end_folio; release = next) {
> -			next = folio_next(release);
> +		for (new_folio = folio; new_folio != end_folio; new_folio = next) {
> +			next = folio_next(new_folio);
>   			/*
> -			 * for buddy allocator like split, the folio containing
> -			 * page will be split next and should not be released,
> -			 * until the folio's order is new_order or stop_split
> -			 * is set to true by the above xas_split() failure.
> +			 * for buddy allocator like split, new_folio containing
> +			 * page could be split again, thus do not change stats
> +			 * yet. Wait until new_folio's order is new_order or
> +			 * stop_split is set to true by the above xas_split()
> +			 * failure.
>   			 */
> -			if (release == page_folio(split_at)) {
> -				folio = release;
> +			if (new_folio == page_folio(split_at)) {
> +				folio = new_folio;
>   				if (split_order != new_order && !stop_split)
>   					continue;
>   			}
> -			if (folio_test_anon(release)) {
> -				mod_mthp_stat(folio_order(release),
> +			if (folio_test_anon(new_folio)) {
> +				mod_mthp_stat(folio_order(new_folio),
>   						MTHP_STAT_NR_ANON, 1);
>   			}

Nit: {} can be dropped

The code is still confusing, so it could be that I'm missing something, but in general this looks like an improvement to me.

I think we can easily get rid of the goto label in __split_unmapped_folio() by doing something like:

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 14bc0b54cf9f0..db0ae957a0ba8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3435,18 +3435,18 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
                                 if (xas_error(xas)) {
                                         ret = xas_error(xas);
                                         stop_split = true;
-                                       goto after_split;
                                 }
                         }
                 }
  
-               folio_split_memcg_refs(folio, old_order, split_order);
-               split_page_owner(&folio->page, old_order, split_order);
-               pgalloc_tag_split(folio, old_order, split_order);
+               if (!stop_split) {
+                       folio_split_memcg_refs(folio, old_order, split_order);
+                       split_page_owner(&folio->page, old_order, split_order);
+                       pgalloc_tag_split(folio, old_order, split_order);
  
-               __split_folio_to_order(folio, old_order, split_order);
+                       __split_folio_to_order(folio, old_order, split_order);
+               }
  
-after_split:
                 /*
                  * Iterate through after-split folios and update folio stats.
                  * But in buddy allocator like split, the folio



-- 
Cheers,

David / dhildenb
