Message-ID: <20251223092526.140566-1-ranxiaokai627@163.com>
Date: Tue, 23 Dec 2025 09:25:26 +0000
From: ranxiaokai627@....com
To: akpm@...ux-foundation.org,
vbabka@...e.cz,
surenb@...gle.com,
mhocko@...e.com,
jackmanb@...gle.com,
hannes@...xchg.org,
ziy@...dia.com,
david@...nel.org,
luizcap@...hat.com
Cc: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
ran.xiaokai@....com.cn,
ranxiaokai627@....com
Subject: [PATCH] mm/page_owner: fix prematurely released rcu_read_lock()
From: Ran Xiaokai <ran.xiaokai@....com.cn>
On CONFIG_SPARSEMEM systems, page_ext uses RCU to synchronize with
memory hotplug, ensuring that page_ext memory is not freed by a
concurrent MEM_OFFLINE event while page_ext data is being accessed.
Since page_owner data is part of page_ext, rcu_read_lock() must be
held for the entire duration of a page_owner access and must not be
released midway; otherwise a use-after-free can occur. The
problematic sequence is:
CPU0                                          CPU1
__folio_copy_owner():                         MEM_OFFLINE:
  page_ext = page_ext_get(&old->page);
  old_page_owner = ...
  page_ext_put(page_ext);

  page_ext = page_ext_get(&newfolio->page);
  new_page_owner = ...
  page_ext_put(page_ext);
                                              __invalidate_page_ext(pfn);
                                              synchronize_rcu();
                                              __free_page_ext(pfn);
  old_page_owner->pid
  new_page_owner->order   ---> access to freed area
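
For reference, page_ext_get() enters the RCU read-side critical
section and page_ext_put() leaves it, so each get/put pair only
protects the accesses made between them (simplified sketch of the
accessors in mm/page_ext.c):

  /* Acquire a page_ext reference; enters an RCU read-side section. */
  struct page_ext *page_ext_get(const struct page *page)
  {
          struct page_ext *page_ext;

          rcu_read_lock();
          page_ext = lookup_page_ext(page);
          if (!page_ext) {
                  rcu_read_unlock();
                  return NULL;
          }

          return page_ext;
  }

  /* Release the reference; leaves the RCU read-side section. */
  void page_ext_put(struct page_ext *page_ext)
  {
          if (unlikely(!page_ext))
                  return;

          rcu_read_unlock();
  }

RCU read-side critical sections nest, so it is safe to keep the old
folio's page_ext reference held across the second page_ext_get() and
to release both references only after the last page_owner access.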
Fixes: 3a812bed3d32a ("mm: page_owner: use new iteration API")
Signed-off-by: Ran Xiaokai <ran.xiaokai@....com.cn>
---
mm/page_owner.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/mm/page_owner.c b/mm/page_owner.c
index b6a394a130ec..5d6860e54be7 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -375,24 +375,25 @@ void __split_page_owner(struct page *page, int old_order, int new_order)
 void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
 	struct page_ext *page_ext;
+	struct page_ext *old_page_ext, *new_page_ext;
 	struct page_ext_iter iter;
 	struct page_owner *old_page_owner;
 	struct page_owner *new_page_owner;
 	depot_stack_handle_t migrate_handle;
 
-	page_ext = page_ext_get(&old->page);
-	if (unlikely(!page_ext))
+	old_page_ext = page_ext_get(&old->page);
+	if (unlikely(!old_page_ext))
 		return;
 
-	old_page_owner = get_page_owner(page_ext);
-	page_ext_put(page_ext);
+	old_page_owner = get_page_owner(old_page_ext);
 
-	page_ext = page_ext_get(&newfolio->page);
-	if (unlikely(!page_ext))
+	new_page_ext = page_ext_get(&newfolio->page);
+	if (unlikely(!new_page_ext)) {
+		page_ext_put(old_page_ext);
 		return;
+	}
 
-	new_page_owner = get_page_owner(page_ext);
-	page_ext_put(page_ext);
+	new_page_owner = get_page_owner(new_page_ext);
 
 	migrate_handle = new_page_owner->handle;
 	__update_page_owner_handle(&newfolio->page, old_page_owner->handle,
@@ -414,12 +415,12 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 	 * for the new one and the old folio otherwise there will be an imbalance
 	 * when subtracting those pages from the stack.
 	 */
-	rcu_read_lock();
 	for_each_page_ext(&old->page, 1 << new_page_owner->order, page_ext, iter) {
 		old_page_owner = get_page_owner(page_ext);
 		old_page_owner->handle = migrate_handle;
 	}
-	rcu_read_unlock();
+	page_ext_put(new_page_ext);
+	page_ext_put(old_page_ext);
 }
 
 void pagetypeinfo_showmixedcount_print(struct seq_file *m,
--
2.25.1