Message-ID:
	<TYSPR06MB7158089CBC192CADDD09D323F69EA@TYSPR06MB7158.apcprd06.prod.outlook.com>
Date: Wed, 21 May 2025 10:24:43 +0000
From: "huk23@...udan.edu.cn" <huk23@...udan.edu.cn>
To: Dave Kleikamp <shaggy@...nel.org>
CC: syzkaller <syzkaller@...glegroups.com>, linux-kernel
	<linux-kernel@...r.kernel.org>, Jiaji Qin <jjtan24@...udan.edu.cn>, Shuoran
 Bai <baishuoran@...eu.edu.cn>
Subject: KASAN: slab-use-after-free Write in diWrite

Dear Maintainers,



When fuzzing the latest Linux kernel with our customized Syzkaller, the following crash (the 101st) was triggered.


HEAD commit: 6537cfb395f352782918d8ee7b7f10ba2cc3cbf2
git tree: upstream
Output: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/101_KASAN%3A%20slab-use-after-free%20Write%20in%20diWrite/101report.txt
Kernel config: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/config.txt
C reproducer: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/101_KASAN%3A%20slab-use-after-free%20Write%20in%20diWrite/101repro.c
Syzlang reproducer: https://github.com/pghk13/Kernel-Bug/blob/main/0520_6.15-rc6/101_KASAN%3A%20slab-use-after-free%20Write%20in%20diWrite/101repro.txt



This is a slab-use-after-free bug in the JFS filesystem driver, inside the diWrite function. When JFS is mounted over a loop device and the backing storage of that loop device is changed via the LOOP_SET_FD ioctl, JFS may not correctly invalidate its internal cache.
Triggering procedure: a JFS filesystem is mounted over a loop device; the backing file of that loop device is then changed, and subsequent operations on the JFS filesystem that write back inode data (here jfs_readdir, triggered by the getdents64 system call, which in turn calls txCommit and diWrite) hit stale state. diWrite writes to a memory address it believes is still valid, but that address belongs to a slab object that has already been freed (by ext4-related writeback) and possibly reallocated. A minimal sketch of this sequence is shown below.
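
For illustration only, here is a minimal userspace sketch of the trigger sequence described above; it is not the actual reproducer (which is linked above). The image paths, loop device number, and the detach/re-attach step are placeholders and may differ from what the real reproducer does, and error handling is omitted:

/*
 * Sketch of the trigger sequence (illustrative only, assumptions noted above):
 * mount JFS over a loop device, swap the loop device's backing file,
 * then read the directory so getdents64 reaches jfs_readdir.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/loop.h>

int main(void)
{
	char buf[4096];

	/* 1. Attach a pre-built JFS image to a loop device and mount it. */
	int imgfd  = open("jfs.img", O_RDWR);       /* placeholder image path */
	int loopfd = open("/dev/loop4", O_RDWR);    /* placeholder loop device */
	ioctl(loopfd, LOOP_SET_FD, imgfd);
	mount("/dev/loop4", "/mnt/jfs", "jfs", 0, NULL);

	/* 2. Change the loop device's backing file underneath the mounted fs.
	 *    The report describes this via LOOP_SET_FD; detaching first with
	 *    LOOP_CLR_FD is shown here only to make the sketch self-contained. */
	int otherfd = open("other.img", O_RDWR);    /* e.g. an ext4 image */
	ioctl(loopfd, LOOP_CLR_FD, 0);
	ioctl(loopfd, LOOP_SET_FD, otherfd);

	/* 3. Read the now-stale JFS directory: getdents64 ends up in
	 *    jfs_readdir -> txCommit -> diWrite on freed memory. */
	int dirfd = open("/mnt/jfs", O_RDONLY | O_DIRECTORY);
	syscall(SYS_getdents64, dirfd, buf, sizeof(buf));

	return 0;
}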


We have reproduced this issue several times on 6.15-rc6.


The 2024 syzbot report of this bug is available here: https://groups.google.com/g/syzkaller-lts-bugs/c/CVD1uqZnFPA/m/P4-Bi8BmAwAJ

If you fix this issue, please add the following tag to the commit:
Reported-by: Kun Hu <huk23@...udan.edu.cn>
Reported-by: Jiaji Qin <jjtan24@...udan.edu.cn>
Reported-by: Shuoran Bai <baishuoran@...eu.edu.cn>


loop4: detected capacity change from 0 to 32768
==================================================================
BUG: KASAN: slab-use-after-free in diWrite+0xeb0/0x1930
Write of size 32 at addr ffff888052f700c0 by task syz.4.12/14401

CPU: 2 UID: 0 PID: 14401 Comm: syz.4.12 Not tainted 6.15.0-rc6 #1 PREEMPT(full)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x116/0x1b0
 print_report+0xc1/0x630
 kasan_report+0x96/0xd0
 kasan_check_range+0xed/0x1a0
 __asan_memcpy+0x3d/0x60
 diWrite+0xeb0/0x1930
 txCommit+0x6c2/0x4720
 jfs_readdir+0x2afa/0x44a0
 wrap_directory_iterator+0xa1/0xe0
 iterate_dir+0x2a5/0xab0
 __x64_sys_getdents64+0x153/0x2e0
 do_syscall_64+0xcf/0x260
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb8ca9acadd
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fb8c87f5ba8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d9
RAX: ffffffffffffffda RBX: 00007fb8caba5fa0 RCX: 00007fb8ca9acadd
RDX: 000000000000009e RSI: 0000000020000280 RDI: 0000000000000005
RBP: 00007fb8caa2ab8f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fb8caba5fac R14: 00007fb8caba6038 R15: 00007fb8c87f5d40
 </TASK>

Allocated by task 84:
 kasan_save_stack+0x24/0x50
 kasan_save_track+0x14/0x30
 __kasan_slab_alloc+0x87/0x90
 kmem_cache_alloc_noprof+0x166/0x4a0
 mempool_alloc_noprof+0x159/0x360
 bvec_alloc+0x171/0x1e0
 bio_alloc_bioset+0x4aa/0x920
 ext4_bio_write_folio+0xcb6/0x1a90
 mpage_submit_folio+0x1bf/0x350
 mpage_process_page_bufs+0x6cc/0x870
 mpage_prepare_extent_to_map+0x75c/0x1360
 ext4_do_writepages+0xc96/0x35c0
 ext4_writepages+0x371/0x7b0
 do_writepages+0x1ac/0x810
 __writeback_single_inode+0x12e/0xf50
 writeback_sb_inodes+0x5f5/0xee0
 __writeback_inodes_wb+0xbe/0x270
 wb_writeback+0x728/0xb50
 wb_workfn+0x96e/0xe90
 process_scheduled_works+0x5de/0x1bd0
 worker_thread+0x5a9/0xd10
 kthread+0x447/0x8a0
 ret_from_fork+0x48/0x80
 ret_from_fork_asm+0x1a/0x30

Freed by task 0:
 kasan_save_stack+0x24/0x50
 kasan_save_track+0x14/0x30
 kasan_save_free_info+0x3a/0x60
 __kasan_slab_free+0x54/0x70
 kmem_cache_free+0x14d/0x550
 mempool_free+0xe9/0x3a0
 bvec_free+0xbd/0xf0
 bio_free+0xaa/0x130
 bio_put+0x35c/0x590
 ext4_end_bio+0x45b/0x6e0
 bio_endio+0x795/0xab0
 blk_update_request+0x5b6/0x17a0
 scsi_end_request+0x7a/0x7c0
 scsi_io_completion+0x17b/0x1560
 scsi_complete+0x12a/0x260
 blk_complete_reqs+0xb2/0xf0
 handle_softirqs+0x1be/0x850
 irq_exit_rcu+0xfd/0x150
 sysvec_call_function_single+0xde/0x100
 asm_sysvec_call_function_single+0x1a/0x20

The buggy address belongs to the object at ffff888052f70000
 which belongs to the cache biovec-max of size 4096
The buggy address is located 192 bytes inside of
 freed 4096-byte region [ffff888052f70000, ffff888052f71000)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff888052f70000 pfn:0x52f70
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x4fff00000000240(workingset|head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000240 ffff888041346a00 ffffea0001197810 ffffea00011c6010
raw: ffff888052f70000 0000000000070000 00000000f5000000 0000000000000000
head: 04fff00000000240 ffff888041346a00 ffffea0001197810 ffffea00011c6010
head: ffff888052f70000 0000000000070000 00000000f5000000 0000000000000000
head: 04fff00000000003 ffffea00014bdc01 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd2800(GFP_NOWAIT|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 92, tgid 92 (kworker/u19:1), ts 99474904265, free_ts 99384964285
 prep_new_page+0x1b0/0x1e0
 get_page_from_freelist+0x1c80/0x3a40
 __alloc_frozen_pages_noprof+0x2fd/0x6d0
 alloc_pages_mpol+0x209/0x550
 new_slab+0x254/0x350
 ___slab_alloc+0xf0c/0x17c0
 __slab_alloc.isra.0+0x56/0xb0
 kmem_cache_alloc_noprof+0x273/0x4a0
 mempool_alloc_noprof+0x159/0x360
 bvec_alloc+0x171/0x1e0
 bio_alloc_bioset+0x4aa/0x920
 ext4_bio_write_folio+0xcb6/0x1a90
 mpage_submit_folio+0x1bf/0x350
 mpage_process_page_bufs+0x6cc/0x870
 mpage_prepare_extent_to_map+0x75c/0x1360
 ext4_do_writepages+0xc96/0x35c0
page last free pid 9470 tgid 9470 stack trace:
 __free_frozen_pages+0x7cd/0x1320
 __put_partials+0x14c/0x170
 qlist_free_all+0x50/0x130
 kasan_quarantine_reduce+0x168/0x1c0
 __kasan_slab_alloc+0x67/0x90
 kmem_cache_alloc_noprof+0x166/0x4a0
 vm_area_alloc+0x20/0x170
 do_brk_flags+0x293/0x13a0
 vm_brk_flags+0x3a8/0x5f0
 elf_load+0x3eb/0x760
 load_elf_binary+0x34ed/0x50c0
 bprm_execve+0x8df/0x1630
 do_execveat_common.isra.0+0x4ce/0x630
 __x64_sys_execve+0x8e/0xb0
 do_syscall_64+0xcf/0x260
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff888052f6ff80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff888052f70000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888052f70080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                           ^
 ffff888052f70100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888052f70180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
Thanks,
Kun Hu

