Message-ID: <71b8c4ab-3f4f-4b45-a9ea-706871463a83@kernel.org>
Date: Wed, 26 Nov 2025 10:56:35 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Zi Yan <ziy@...dia.com>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>,
Miaohe Lin <linmiaohe@...wei.com>, Naoya Horiguchi
<nao.horiguchi@...il.com>, Wei Yang <richard.weiyang@...il.com>,
Balbir Singh <balbirs@...dia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/4] mm/huge_memory: replace can_split_folio() with
direct refcount calculation
> static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int new_order,
> struct page *split_at, struct xa_state *xas,
> struct address_space *mapping, bool do_lru,
> struct list_head *list, enum split_type split_type,
> - pgoff_t end, int *nr_shmem_dropped, int extra_pins)
> + pgoff_t end, int *nr_shmem_dropped)
> {
> struct folio *end_folio = folio_next(folio);
> struct folio *new_folio, *next;
> @@ -3782,7 +3773,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
> VM_WARN_ON_ONCE(!mapping && end);
> /* Prevent deferred_split_scan() touching ->_refcount */
> ds_queue = folio_split_queue_lock(folio);
> - if (folio_ref_freeze(folio, 1 + extra_pins)) {
> + if (folio_ref_freeze(folio, folio_cache_ref_count(folio) + 1)) {
> struct swap_cluster_info *ci = NULL;
> struct lruvec *lruvec;
> int expected_refs;
> @@ -3853,7 +3844,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
>
> zone_device_private_split_cb(folio, new_folio);
>
> - expected_refs = folio_expected_ref_count(new_folio) + 1;
> + expected_refs = folio_cache_ref_count(new_folio) + 1;
> folio_ref_unfreeze(new_folio, expected_refs);
>
> if (do_lru)
> @@ -3897,7 +3888,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
> * Otherwise, a parallel folio_try_get() can grab @folio
> * and its caller can see stale page cache entries.
> */
> - expected_refs = folio_expected_ref_count(folio) + 1;
> + expected_refs = folio_cache_ref_count(folio) + 1;
> folio_ref_unfreeze(folio, expected_refs);

Can we just get rid of the expected_refs variable as well?
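I.e., at both unfreeze sites, something like (untested, just to illustrate
the suggestion):

```c
	folio_ref_unfreeze(folio, folio_cache_ref_count(folio) + 1);
```

Now that the value is a single call + 1, the local variable doesn't
really buy us anything anymore.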
Apart from that LGTM, thanks!
Acked-by: David Hildenbrand (Red Hat) <david@...nel.org>
--
Cheers
David