Message-ID:
<TYSPR06MB7158FCD2657E6FDDEF2EDBF8F69EA@TYSPR06MB7158.apcprd06.prod.outlook.com>
Date: Wed, 21 May 2025 11:34:15 +0000
From: "huk23@...udan.edu.cn" <huk23@...udan.edu.cn>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: syzkaller <syzkaller@...glegroups.com>, linux-kernel
<linux-kernel@...r.kernel.org>, Jiaji Qin <jjtan24@...udan.edu.cn>, Shuoran
Bai <baishuoran@...eu.edu.cn>
Subject: KASAN: slab-use-after-free Read in isolate_folios
Dear Maintainers,
While fuzzing the latest Linux kernel with our customized Syzkaller, the following crash (the 103rd) was triggered.
HEAD commit: 6537cfb395f352782918d8ee7b7f10ba2cc3cbf2
git tree: upstream
Output: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/103_KASAN%3A%20slab-use-after-free%20Read%20in%20isolate_folios/103report.txt
Kernel config: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/config.txt
C reproducer: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/103_KASAN%3A%20slab-use-after-free%20Read%20in%20isolate_folios/103repro.c
Syzlang reproducer: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/103_KASAN%3A%20slab-use-after-free%20Read%20in%20isolate_folios/103repro.txt
This bug appears to be a kernel concurrency and memory-management issue. The GFS2 file system frees its gfs2_glock objects via an RCU callback. In the meantime, the memory-reclaim subsystem (kswapd), while scanning the LRU lists, may encounter a folio that is still associated with a gfs2_glock that is about to be freed or has already been freed (but whose RCU cleanup has not yet completed). While handling such a folio, isolate_folios() mistakenly reads memory belonging to the freed gfs2_glock object, resulting in a slab-use-after-free read. A simplified model of the suspected pattern is sketched below.
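
As an illustration of the pattern we suspect (not actual kernel code), here is a minimal standalone C model. All struct and function names in it (glock, folio, put_glock, isolate) are placeholders, and the layout is heavily simplified; it only shows how a reader of folio->mapping can touch freed memory when the object embedding that mapping is freed first, which is what we believe happens between gfs2_glock_dealloc() and isolate_folios().

#include <stdio.h>
#include <stdlib.h>

struct mapping {                 /* stand-in for struct address_space */
    unsigned long flags;
};

struct glock {                   /* stand-in for the gfs2_glock(aspace) object */
    int ref;
    struct mapping aspace;       /* embedded mapping; lives and dies with the object */
};

struct folio {                   /* stand-in for struct folio */
    struct mapping *mapping;     /* points into the embedded aspace above */
};

/* "Eviction" side: drop the last reference and free the object,
 * standing in for the gfs2_glock_free() -> RCU -> gfs2_glock_dealloc() path. */
static void put_glock(struct glock *gl)
{
    if (--gl->ref == 0)
        free(gl);                /* folio->mapping now dangles */
}

/* "Reclaim" side: standing in for isolate_folios() inspecting
 * folio->mapping with nothing pinning the object it is embedded in. */
static int isolate(struct folio *folio)
{
    return (int)(folio->mapping->flags & 1);   /* use-after-free read */
}

int main(void)
{
    struct glock *gl = calloc(1, sizeof(*gl));
    struct folio folio = { .mapping = &gl->aspace };

    gl->ref = 1;
    put_glock(gl);                              /* the free wins the race */
    printf("stale flag bit: %d\n", isolate(&folio));
    return 0;
}

Built with -fsanitize=address, the read in isolate() should be reported as a heap-use-after-free, mirroring the KASAN report below. The sketch makes no claim about where the missing reference or RCU protection actually belongs in the kernel.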
We have reproduced this issue several times on 6.15-rc6.
If you fix this issue, please add the following tag to the commit:
Reported-by: Kun Hu <huk23@...udan.edu.cn>, Jiaji Qin <jjtan24@...udan.edu.cn>, Shuoran Bai <baishuoran@...eu.edu.cn>
==================================================================
BUG: KASAN: slab-use-after-free in isolate_folios+0x157e/0x3fa0
Read of size 8 at addr ffff888029da93a0 by task kswapd0/123
CPU: 0 UID: 0 PID: 123 Comm: kswapd0 Not tainted 6.15.0-rc6 #1 PREEMPT(full)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x116/0x1b0
print_report+0xc1/0x630
kasan_report+0x96/0xd0
kasan_check_range+0xed/0x1a0
isolate_folios+0x157e/0x3fa0
evict_folios+0x19f/0x2060
try_to_shrink_lruvec+0x604/0x9b0
shrink_one+0x412/0x7c0
shrink_node+0x2378/0x3bf0
balance_pgdat+0xa7d/0x1750
kswapd+0x4aa/0xb80
kthread+0x447/0x8a0
ret_from_fork+0x48/0x80
ret_from_fork_asm+0x1a/0x30
</TASK>
Allocated by task 14332:
kasan_save_stack+0x24/0x50
kasan_save_track+0x14/0x30
__kasan_slab_alloc+0x87/0x90
kmem_cache_alloc_noprof+0x166/0x4a0
gfs2_glock_get+0x203/0x11d0
gfs2_inode_lookup+0x266/0x8f0
gfs2_dir_search+0x215/0x2d0
gfs2_lookupi+0x496/0x650
init_inodes+0x6e3/0x2be0
gfs2_fill_super+0x1bd1/0x2d00
get_tree_bdev_flags+0x38a/0x620
gfs2_get_tree+0x4e/0x280
vfs_get_tree+0x93/0x340
path_mount+0x1270/0x1b90
do_mount+0xb3/0x110
__x64_sys_mount+0x193/0x230
do_syscall_64+0xcf/0x260
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Freed by task 28:
kasan_save_stack+0x24/0x50
kasan_save_track+0x14/0x30
kasan_save_free_info+0x3a/0x60
__kasan_slab_free+0x54/0x70
kmem_cache_free+0x14d/0x550
gfs2_glock_dealloc+0xd0/0x150
rcu_core+0x7a4/0x1660
handle_softirqs+0x1be/0x850
run_ksoftirqd+0x3a/0x60
smpboot_thread_fn+0x3d2/0xaa0
kthread+0x447/0x8a0
ret_from_fork+0x48/0x80
ret_from_fork_asm+0x1a/0x30
Last potentially related work creation:
kasan_save_stack+0x24/0x50
kasan_record_aux_stack+0xb0/0xc0
__call_rcu_common.constprop.0+0x99/0x820
gfs2_glock_free+0x35/0xa0
gfs2_glock_put+0x32/0x40
gfs2_glock_put_eventually+0x73/0x90
gfs2_evict_inode+0x8ff/0x15f0
evict+0x3db/0x830
iput+0x513/0x820
gfs2_jindex_free+0x3a6/0x5a0
init_inodes+0x1385/0x2be0
gfs2_fill_super+0x1bd1/0x2d00
get_tree_bdev_flags+0x38a/0x620
gfs2_get_tree+0x4e/0x280
vfs_get_tree+0x93/0x340
path_mount+0x1270/0x1b90
do_mount+0xb3/0x110
__x64_sys_mount+0x193/0x230
do_syscall_64+0xcf/0x260
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Second to last potentially related work creation:
kasan_save_stack+0x24/0x50
kasan_record_aux_stack+0xb0/0xc0
insert_work+0x36/0x240
__queue_work+0x868/0x1240
__queue_delayed_work+0x36b/0x460
queue_delayed_work_on+0x12c/0x140
gfs2_glock_queue_work+0x73/0x110
do_xmote+0x7a0/0xf00
run_queue+0x2ec/0x6a0
glock_work_func+0x127/0x470
process_scheduled_works+0x5de/0x1bd0
worker_thread+0x5a9/0xd10
kthread+0x447/0x8a0
ret_from_fork+0x48/0x80
ret_from_fork_asm+0x1a/0x30
The buggy address belongs to the object at ffff888029da8fd8
which belongs to the cache gfs2_glock(aspace) of size 1224
The buggy address is located 968 bytes inside of
freed 1224-byte region [ffff888029da8fd8, ffff888029da94a0)
The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x29da8
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000040(head|node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000040 ffff888043a36dc0 dead000000000122 0000000000000000
raw: 0000000000000000 0000000080180018 00000000f5000000 0000000000000000
head: 00fff00000000040 ffff888043a36dc0 dead000000000122 0000000000000000
head: 0000000000000000 0000000080180018 00000000f5000000 0000000000000000
head: 00fff00000000003 ffffea0000a76a01 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 14332, tgid 14329 (syz.3.4), ts 153619422108, free_ts 153579612259
prep_new_page+0x1b0/0x1e0
get_page_from_freelist+0x1c80/0x3a40
__alloc_frozen_pages_noprof+0x2fd/0x6d0
alloc_pages_mpol+0x209/0x550
new_slab+0x254/0x350
___slab_alloc+0xf0c/0x17c0
__slab_alloc.isra.0+0x56/0xb0
kmem_cache_alloc_noprof+0x273/0x4a0
gfs2_glock_get+0x203/0x11d0
gfs2_inode_lookup+0x266/0x8f0
gfs2_lookup_root+0x57/0x120
init_sb+0xa2a/0x11d0
gfs2_fill_super+0x195a/0x2d00
get_tree_bdev_flags+0x38a/0x620
gfs2_get_tree+0x4e/0x280
vfs_get_tree+0x93/0x340
page last free pid 13807 tgid 13807 stack trace:
__free_frozen_pages+0x7cd/0x1320
__put_partials+0x14c/0x170
qlist_free_all+0x50/0x130
kasan_quarantine_reduce+0x168/0x1c0
__kasan_slab_alloc+0x67/0x90
__kmalloc_noprof+0x1c8/0x600
tomoyo_encode2.part.0+0xec/0x3c0
tomoyo_encode+0x2c/0x60
tomoyo_realpath_from_path+0x187/0x600
tomoyo_path_perm+0x235/0x440
security_inode_getattr+0x122/0x2b0
vfs_getattr+0x26/0x70
vfs_fstat+0x50/0xa0
__do_sys_newfstat+0x83/0x100
do_syscall_64+0xcf/0x260
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Memory state around the buggy address:
ffff888029da9280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888029da9300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888029da9380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff888029da9400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888029da9480: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================
Thanks,
Kun Hu