Message-ID:
	<TYSPR06MB71585D5693970E3C090A1DA5F69EA@TYSPR06MB7158.apcprd06.prod.outlook.com>
Date: Wed, 21 May 2025 11:50:20 +0000
From: "huk23@...udan.edu.cn" <huk23@...udan.edu.cn>
To: Kent Overstreet <kent.overstreet@...ux.dev>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"syzkaller@...glegroups.com" <syzkaller@...glegroups.com>,
	白烁冉 <baishuoran@...eu.edu.cn>, "jjtan24@...udan.edu.cn"
	<jjtan24@...udan.edu.cn>
Subject: KASAN: slab-use-after-free Read in bch2_btree_node_read_done

Dear Maintainers,



While fuzzing the latest Linux kernel with our customized Syzkaller, the following crash (our 104th) was triggered.


HEAD commit: 6537cfb395f352782918d8ee7b7f10ba2cc3cbf2
git tree: upstream
Output: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/104_KASAN%3A%20slab-use-after-free%20Read%20in%20bch2_btree_node_read_done/104report.txt
Kernel config: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/config.txt
C reproducer: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/104_KASAN%3A%20slab-use-after-free%20Read%20in%20bch2_btree_node_read_done/104repro.c
Syzlang reproducer: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/104_KASAN%3A%20slab-use-after-free%20Read%20in%20bch2_btree_node_read_done/104repro.txt



The bug is a use-after-free in the btree handling code of the bcachefs filesystem. It most likely occurs in bch2_btree_node_read_done() (defined around line 193): while processing a btree node and validating its contents, the code accesses a radix tree node that has already been freed via the RCU mechanism. The root cause could be a memory management or reference counting issue, especially in the complex sequence of operations performed during filesystem recovery. The problem may also lie in btree_io.c, around lines 300-350, which handles btree validation and reading, particularly in the calls to or implementations of validate_bset, validate_bset_keys, or bch2_drop_whiteouts.
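
To make the suspected access pattern concrete, here is a minimal userspace sketch (illustrative only; the struct and function names below are made up and are not the actual bcachefs/xarray code): a reader caches a pointer to a node, the node is freed by a deferred reclaim path (call_rcu()/kfree_rcu() in the kernel), and a later dereference through the stale pointer is exactly the kind of read KASAN flags as a slab-use-after-free.

#include <stdio.h>
#include <stdlib.h>

struct node {
	unsigned long slots[8];		/* stands in for radix tree node slots */
};

static struct node *cached;		/* stale reference kept past the free */

static void reclaim(struct node *n)
{
	/* In the kernel this would be call_rcu()/kfree_rcu(); once the node
	 * is released, any later access through 'cached' is a use-after-free. */
	free(n);
}

int main(void)
{
	struct node *n = calloc(1, sizeof(*n));

	cached = n;			/* reader holds on to the pointer ... */
	reclaim(n);			/* ... while the node is released */

	/* Use-after-free read: ASan/KASAN reports the access below. */
	printf("%lu\n", cached->slots[0]);
	return 0;
}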


We have reproduced this issue several times on 6.15-rc6.



If you fix this issue, please add the following tag to the commit:
Reported-by: Kun Hu <huk23@...udan.edu.cn>, Jiaji Qin <jjtan24@...udan.edu.cn>, Shuoran Bai <baishuoran@...eu.edu.cn>


btree=alloc level=0 u64s 11 type btree_ptr_v2 SPOS_MAX len 0 ver 0: seq ac62141f8dc7e261 written 24 min_key POS_MIN durability: 1 ptr: 0:26:0 gen 0
==================================================================
BUG: KASAN: slab-use-after-free in bch2_btree_node_read_done+0x4737/0x5330
Read of size 8 at addr ffff88806d720010 by task syz.6.7/14359

CPU: 2 UID: 0 PID: 14359 Comm: syz.6.7 Not tainted 6.15.0-rc6 #1 PREEMPT(full) 
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x116/0x1b0
 print_report+0xc1/0x630
 kasan_report+0x96/0xd0
 bch2_btree_node_read_done+0x4737/0x5330
 btree_node_read_work+0xb34/0x1e10
 bch2_btree_node_read+0x7c8/0x1020
 bch2_btree_root_read+0x2c3/0x460
 bch2_fs_recovery+0x294e/0x59a0
 bch2_fs_start+0x6e0/0xd20
 bch2_fs_get_tree+0x4c6/0x2140
 vfs_get_tree+0x93/0x340
 path_mount+0x1270/0x1b90
 do_mount+0xb3/0x110
 __x64_sys_mount+0x193/0x230
 do_syscall_64+0xcf/0x260
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f9d6f3af51e
Code: ff ff ff 64 c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 f3 0f 1e fa 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f9d701709b8 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 0000000000005943 RCX: 00007f9d6f3af51e
RDX: 00000000200000c0 RSI: 0000000020000180 RDI: 00007f9d70170a10
RBP: 00007f9d70170a50 R08: 00007f9d70170a50 R09: 0000000000000010
R10: 0000000000000010 R11: 0000000000000202 R12: 00000000200000c0
R13: 0000000020000180 R14: 00007f9d70170a10 R15: 0000000020000480
 </TASK>

Allocated by task 14334:
 kasan_save_stack+0x24/0x50
 kasan_save_track+0x14/0x30
 __kasan_slab_alloc+0x87/0x90
 kmem_cache_alloc_lru_noprof+0x165/0x4a0
 xas_alloc+0x361/0x480
 xas_create+0x3d7/0x1530
 xas_store+0x92/0x1840
 shmem_add_to_page_cache+0x663/0xa30
 shmem_alloc_and_add_folio+0x454/0xbb0
 shmem_get_folio_gfp+0x5a2/0x1530
 shmem_write_begin+0x156/0x310
 generic_perform_write+0x3df/0x8c0
 shmem_file_write_iter+0x111/0x140
 vfs_write+0xba0/0x1100
 ksys_write+0x121/0x240
 do_syscall_64+0xcf/0x260
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 28:
 kasan_save_stack+0x24/0x50
 kasan_save_track+0x14/0x30
 kasan_save_free_info+0x3a/0x60
 __kasan_slab_free+0x54/0x70
 kmem_cache_free+0x14d/0x550
 rcu_core+0x7a4/0x1660
 handle_softirqs+0x1be/0x850
 run_ksoftirqd+0x3a/0x60
 smpboot_thread_fn+0x3d2/0xaa0
 kthread+0x447/0x8a0
 ret_from_fork+0x48/0x80
 ret_from_fork_asm+0x1a/0x30

Last potentially related work creation:
 kasan_save_stack+0x24/0x50
 kasan_record_aux_stack+0xb0/0xc0
 __call_rcu_common.constprop.0+0x99/0x820
 xas_store+0xaf9/0x1840
 __filemap_remove_folio+0x417/0x780
 filemap_remove_folio+0xc7/0x210
 truncate_inode_folio+0x4c/0x70
 shmem_undo_range+0x357/0x11b0
 shmem_truncate_range+0x30/0xd0
 shmem_evict_inode+0x2ea/0xa00
 evict+0x3db/0x830
 iput+0x513/0x820
 dentry_unlink_inode+0x2cd/0x4c0
 __dentry_kill+0x186/0x5b0
 dput.part.0+0x49e/0x990
 dput+0x1f/0x30
 __fput+0x515/0xb40
 fput_close_sync+0x10f/0x210
 __x64_sys_close+0x8f/0x120
 do_syscall_64+0xcf/0x260
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff88806d720000
 which belongs to the cache radix_tree_node of size 576
The buggy address is located 16 bytes inside of
 freed 576-byte region [ffff88806d720000, ffff88806d720240)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x6d720
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
memcg:ffff8880756c8b01
flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801b44cb40 dead000000000100 dead000000000122
raw: 0000000000000000 0000000000170017 00000000f5000000 ffff8880756c8b01
head: 04fff00000000040 ffff88801b44cb40 dead000000000100 dead000000000122
head: 0000000000000000 0000000000170017 00000000f5000000 ffff8880756c8b01
head: 04fff00000000002 ffffea0001b5c801 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000004
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Reclaimable, gfp_mask 0x52830(GFP_ATOMIC|__GFP_RECLAIMABLE|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 86, tgid 86 (kworker/u19:2), ts 144458042338, free_ts 0
 prep_new_page+0x1b0/0x1e0
 get_page_from_freelist+0x1c80/0x3a40
 __alloc_frozen_pages_noprof+0x2fd/0x6d0
 alloc_pages_mpol+0x209/0x550
 new_slab+0x254/0x350
 ___slab_alloc+0xf0c/0x17c0
 __slab_alloc.isra.0+0x56/0xb0
 kmem_cache_alloc_noprof+0x273/0x4a0
 radix_tree_node_alloc.constprop.0+0x1e8/0x350
 idr_get_free+0x568/0xab0
 idr_alloc_u32+0x173/0x2d0
 idr_alloc_cyclic+0x105/0x230
 alloc_pid+0x56c/0x1200
 copy_process+0x295d/0x77f0
 kernel_clone+0xea/0xee0
 user_mode_thread+0xc5/0x110
page_owner free stack trace missing

Memory state around the buggy address:
 ffff88806d71ff00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff88806d71ff80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff88806d720000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                         ^
 ffff88806d720080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88806d720100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
Thanks,
Kun Hu
