Message-ID: <20250523091432.17588-1-shivankg@amd.com>
Date: Fri, 23 May 2025 09:14:33 +0000
From: Shivank Garg <shivankg@....com>
To: <akpm@...ux-foundation.org>, <david@...hat.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
CC: <ziy@...dia.com>, <baolin.wang@...ux.alibaba.com>,
<lorenzo.stoakes@...cle.com>, <Liam.Howlett@...cle.com>, <npache@...hat.com>,
<ryan.roberts@....com>, <dev.jain@....com>, <fengwei.yin@...el.com>,
<shivankg@....com>, <bharata@....com>
Subject: [PATCH V2 1/2] mm/khugepaged: clean up refcount check using folio_expected_ref_count()
Use folio_expected_ref_count() instead of open-coded logic in
is_refcount_suitable(). This avoids code duplication and improves
clarity.

Drop is_refcount_suitable() as it is no longer needed.

Suggested-by: David Hildenbrand <david@...hat.com>
Signed-off-by: Shivank Garg <shivankg@....com>
---
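Note (illustration only, not part of the change): the old open-coded check
and the new one are intended to test the same condition, namely whether the
folio carries any references beyond the expected ones from page table
mappings, the pagecache/swapcache and PG_private, e.g. GUP pins. A minimal
sketch, using a hypothetical wrapper name purely for illustration:

static bool folio_has_unexpected_refs(struct folio *folio)
{
	/*
	 * folio_expected_ref_count() accounts for page table mappings,
	 * pagecache/swapcache references and PG_private; any extra pin
	 * (e.g. GUP) makes folio_ref_count() diverge from that value.
	 */
	return folio_expected_ref_count(folio) != folio_ref_count(folio);
}

Any such extra reference makes khugepaged bail out with SCAN_PAGE_COUNT,
just as the old helper did.
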
mm/khugepaged.c | 19 +++----------------
1 file changed, 3 insertions(+), 16 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cc945c6ab3bd..19aa4142bb99 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -548,19 +548,6 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
 	}
 }
 
-static bool is_refcount_suitable(struct folio *folio)
-{
-	int expected_refcount = folio_mapcount(folio);
-
-	if (!folio_test_anon(folio) || folio_test_swapcache(folio))
-		expected_refcount += folio_nr_pages(folio);
-
-	if (folio_test_private(folio))
-		expected_refcount++;
-
-	return folio_ref_count(folio) == expected_refcount;
-}
-
 static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 					unsigned long address,
 					pte_t *pte,
@@ -652,7 +639,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		 * but not from this process. The other process cannot write to
 		 * the page, only trigger CoW.
 		 */
-		if (!is_refcount_suitable(folio)) {
+		if (folio_expected_ref_count(folio) != folio_ref_count(folio)) {
 			folio_unlock(folio);
 			result = SCAN_PAGE_COUNT;
 			goto out;
@@ -1402,7 +1389,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 		 * has excessive GUP pins (i.e. 512). Anyway the same check
 		 * will be done again later the risk seems low.
 		 */
-		if (!is_refcount_suitable(folio)) {
+		if (folio_expected_ref_count(folio) != folio_ref_count(folio)) {
 			result = SCAN_PAGE_COUNT;
 			goto out_unmap;
 		}
@@ -2320,7 +2307,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 			break;
 		}
 
-		if (!is_refcount_suitable(folio)) {
+		if (folio_expected_ref_count(folio) != folio_ref_count(folio)) {
 			result = SCAN_PAGE_COUNT;
 			break;
 		}
--
2.34.1