Message-ID: <20240327075516.1367097-1-zhaoyang.huang@unisoc.com>
Date: Wed, 27 Mar 2024 15:55:16 +0800
From: "zhaoyang.huang" <zhaoyang.huang@...soc.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox
<willy@...radead.org>,
Christoph Hellwig <hch@...radead.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>,
Zhaoyang Huang
<huangzhaoyang@...il.com>, <steve.kang@...soc.com>
Subject: [PATCH] mm: get the folio's refcnt before clearing PG_lru in folio_isolate_lru
From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
The following race happens when the caller of folio_isolate_lru() relies on
the refcount held by the page cache. Move folio_get() ahead of
folio_test_clear_lru() to make folio_isolate_lru() more robust.
0. Thread_isolate calls folio_isolate_lru() while holding one page-cache
refcount and gets preempted before folio_get():
folio_isolate_lru
    VM_BUG_ON(!folio->refcnt)
    if (folio_test_clear_lru(folio))
        <preempted>
        folio_get()
1. Thread_release calls release_pages() and hits the window where the folio
has already lost its page-cache refcount before folio_put_testzero():
release_pages
    <folio has been removed from the page cache>
    folio_put_testzero(folio) == true
        <the refcount taken by the batch is the only one left
         and is dropped here>
    if (folio_test_clear_lru(folio))
        lruvec_del_folio(folio)
    <PG_lru was already cleared by Thread_isolate, so the folio is
     not deleted from the LRU>
    list_add(folio, pages_to_free);
    <the list_add above breaks the LRU's integrity>
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3ef654addd44..42f15ca06e09 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1731,10 +1731,10 @@ bool folio_isolate_lru(struct folio *folio)
 
 	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
 
+	folio_get(folio);
 	if (folio_test_clear_lru(folio)) {
 		struct lruvec *lruvec;
 
-		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
 		unlock_page_lruvec_irq(lruvec);
--
2.25.1