Message-ID: <20220706073045.1398379-1-liushixin2@huawei.com>
Date: Wed, 6 Jul 2022 15:30:45 +0800
From: Liu Shixin <liushixin2@...wei.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>, Jan Kara <jack@...e.cz>,
William Kucharski <william.kucharski@...cle.com>,
"Christoph Hellwig" <hch@....de>
CC: <linux-kernel@...r.kernel.org>, <stable@...r.kernel.org>,
Liu Shixin <liushixin2@...wei.com>
Subject: [PATCH 5.15 v2] mm/filemap: fix UAF in find_lock_entries

The refcount is released at the put: label, but the page is dereferenced again
in the next: block (PageTransHuge(), page->index, thp_nr_pages()) when skipping
past a THP. Record the index to skip to before the refcount is released to fix
this use-after-free, which may cause a panic like this:

page:ffffea000491fa40 refcount:1 mapcount:0 mapping:0000000000000000 index:0x1 pfn:0x1247e9
head:ffffea000491fa00 order:3 compound_mapcount:0 compound_pincount:0
memcg:ffff888104f91091
flags: 0x2fffff80010200(slab|head|node=0|zone=2|lastcpupid=0x1fffff)
...
page dumped because: VM_BUG_ON_PAGE(PageTail(page))
------------[ cut here ]------------
kernel BUG at include/linux/page-flags.h:632!
invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN
CPU: 1 PID: 7642 Comm: sh Not tainted 5.15.51-dirty #26
...
Call Trace:
<TASK>
__invalidate_mapping_pages+0xe7/0x540
drop_pagecache_sb+0x159/0x320
iterate_supers+0x120/0x240
drop_caches_sysctl_handler+0xaa/0xe0
proc_sys_call_handler+0x2b4/0x480
new_sync_write+0x3d6/0x5c0
vfs_write+0x446/0x7a0
ksys_write+0x105/0x210
do_syscall_64+0x35/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f52b5733130
...

This problem has already been fixed in mainline by commit 6b24ca4a1a8d ("mm:
Use multi-index entries in the page cache"), which removed the code in
question, so this fix is only needed in the 5.15 stable tree.
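
For reference, the problematic sequence in the 5.15 find_lock_entries() looks
roughly like this (abridged excerpt; the annotations are mine, and the page may
already have been freed and reused by the time the next: block runs):

put:
		put_page(page);		/* may drop the last reference */
next:
		if (!xa_is_value(page) && PageTransHuge(page)) {	/* use-after-free */
			unsigned int nr_pages = thp_nr_pages(page);	/* use-after-free */

			/* Final THP may cross MAX_LFS_FILESIZE on 32-bit */
			xas_set(&xas, page->index + nr_pages);		/* use-after-free */
			if (xas.xa_index < nr_pages)
				break;
		}

The patch below records the index to skip to (next_idx) while the reference is
still held, so the next: block no longer needs to touch the page.
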
Fixes: 5c211ba29deb ("mm: add and use find_lock_entries")
Signed-off-by: Liu Shixin <liushixin2@...wei.com>
---
 mm/filemap.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 00e391e75880..2c65dd314c49 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2090,7 +2090,11 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 
 	rcu_read_lock();
 	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
+		unsigned long next_idx = xas.xa_index;
+
 		if (!xa_is_value(page)) {
+			if (PageTransHuge(page))
+				next_idx = page->index + thp_nr_pages(page);
 			if (page->index < start)
 				goto put;
 			if (page->index + thp_nr_pages(page) - 1 > end)
@@ -2111,11 +2115,9 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 put:
 		put_page(page);
 next:
-		if (!xa_is_value(page) && PageTransHuge(page)) {
-			unsigned int nr_pages = thp_nr_pages(page);
-
+		if (next_idx != xas.xa_index) {
 			/* Final THP may cross MAX_LFS_FILESIZE on 32-bit */
-			xas_set(&xas, page->index + nr_pages);
-			if (xas.xa_index < nr_pages)
+			if (next_idx < xas.xa_index)
 				break;
+			xas_set(&xas, next_idx);
 		}
--
2.25.1