Message-ID: <20240813120328.1275952-2-usamaarif642@gmail.com>
Date: Tue, 13 Aug 2024 13:02:44 +0100
From: Usama Arif <usamaarif642@...il.com>
To: akpm@...ux-foundation.org,
linux-mm@...ck.org
Cc: hannes@...xchg.org,
riel@...riel.com,
shakeel.butt@...ux.dev,
roman.gushchin@...ux.dev,
yuzhao@...gle.com,
david@...hat.com,
baohua@...nel.org,
ryan.roberts@....com,
rppt@...nel.org,
willy@...radead.org,
cerasuolodomenico@...il.com,
corbet@....net,
linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
kernel-team@...a.com,
Shuang Zhai <zhais@...gle.com>,
Usama Arif <usamaarif642@...il.com>
Subject: [PATCH v3 1/6] mm: free zapped tail pages when splitting isolated thp

From: Yu Zhao <yuzhao@...gle.com>

If a tail page has only two references left, one inherited from the
isolation of its head and the other from lru_add_page_tail(), which we
are about to drop, it means this tail page was concurrently zapped.
Then we can safely free it and save page reclaim or migration the
trouble of trying it.
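
For illustration only, here is a small user-space model (not kernel
code; folio_model and ref_freeze are made-up stand-ins for struct folio
and folio_ref_freeze()) of the check this relies on: the freeze is an
atomic compare-and-exchange that drops the refcount to zero only if it
still equals the expected value, so a racing reference holder makes the
freeze fail instead of reviving a freed page.

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct folio_model { atomic_int refcount; };

	/* Toy stand-in for folio_ref_freeze(): refcount -> 0 iff it
	 * still equals @expected; any concurrent reference makes the
	 * freeze fail.
	 */
	static bool ref_freeze(struct folio_model *f, int expected)
	{
		int old = expected;
		return atomic_compare_exchange_strong(&f->refcount, &old, 0);
	}

	int main(void)
	{
		/* ref 1: isolation of the head; ref 2: lru_add_page_tail() */
		struct folio_model tail = { .refcount = 2 };

		if (ref_freeze(&tail, 2))
			printf("concurrently zapped: safe to free\n");

		atomic_store(&tail.refcount, 3);	/* e.g. a pin remains */
		if (!ref_freeze(&tail, 2))
			printf("still in use: leave it to reclaim/migration\n");
		return 0;
	}
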
Signed-off-by: Yu Zhao <yuzhao@...gle.com>
Tested-by: Shuang Zhai <zhais@...gle.com>
Signed-off-by: Usama Arif <usamaarif642@...il.com>
Acked-by: Johannes Weiner <hannes@...xchg.org>
---
 mm/huge_memory.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 04ee8abd6475..85a424e954be 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3059,7 +3059,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unsigned int new_nr = 1 << new_order;
 	int order = folio_order(folio);
 	unsigned int nr = 1 << order;
+	struct folio_batch free_folios;
 
+	folio_batch_init(&free_folios);
 	/* complete memcg works before add pages to LRU */
 	split_page_memcg(head, order, new_order);
 
@@ -3143,6 +3145,27 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		if (subpage == page)
 			continue;
 		folio_unlock(new_folio);
+		/*
+		 * If a folio has only two references left, one inherited
+		 * from the isolation of its head and the other from
+		 * lru_add_page_tail() which we are about to drop, it means this
+		 * folio was concurrently zapped. Then we can safely free it
+		 * and save page reclaim or migration the trouble of trying it.
+		 */
+		if (list && folio_ref_freeze(new_folio, 2)) {
+			VM_WARN_ON_ONCE_FOLIO(folio_test_lru(new_folio), new_folio);
+			VM_WARN_ON_ONCE_FOLIO(folio_test_large(new_folio), new_folio);
+			VM_WARN_ON_ONCE_FOLIO(folio_mapped(new_folio), new_folio);
+
+			folio_clear_active(new_folio);
+			folio_clear_unevictable(new_folio);
+			list_del(&new_folio->lru);
+			if (!folio_batch_add(&free_folios, new_folio)) {
+				mem_cgroup_uncharge_folios(&free_folios);
+				free_unref_folios(&free_folios);
+			}
+			continue;
+		}
 
 		/*
 		 * Subpages may be freed if there wasn't any mapping
@@ -3153,6 +3176,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		 */
 		free_page_and_swap_cache(subpage);
 	}
+
+	if (free_folios.nr) {
+		mem_cgroup_uncharge_folios(&free_folios);
+		free_unref_folios(&free_folios);
+	}
 }
 
 /* Racy check whether the huge page can be split */
--
2.43.5