Message-ID: <746aed.1562c.1981cd4e43c.Coremail.baishuoran@hrbeu.edu.cn>
Date: Fri, 18 Jul 2025 17:19:30 +0800 (GMT+08:00)
From: 白烁冉 <baishuoran@...eu.edu.cn>
To: "Andrey Ryabinin" <ryabinin.a.a@...il.com>,
"Andrew Morton" <akpm@...ux-foundation.org>
Cc: "Kun Hu" <huk23@...udan.edu.cn>, "Jiaji Qin" <jjtan24@...udan.edu.cn>,
"Alexander Potapenko" <glider@...gle.com>,
"Andrey Konovalov" <andreyknvl@...il.com>,
"Dmitry Vyukov" <dvyukov@...gle.com>,
"Vincenzo Frascino" <vincenzo.frascino@....com>,
kasan-dev@...glegroups.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: KASAN: out-of-bounds in __asan_memcpy
Dear Maintainers,
While fuzzing the latest Linux kernel with our customized Syzkaller, we triggered the following crash.
HEAD commit: 6537cfb395f352782918d8ee7b7f10ba2cc3cbf2
git tree: upstream
Output: https://github.com/pghk13/Kernel-Bug/blob/main/0702_6.14/KASAN%3A%20out-of-bounds%20in%20__asan_memcpy/11_report.txt
Kernel config: https://github.com/pghk13/Kernel-Bug/blob/main/0219_6.13rc7_todo/config.txt
C reproducer: https://github.com/pghk13/Kernel-Bug/blob/main/0702_6.14/KASAN%3A%20out-of-bounds%20in%20__asan_memcpy/11_repro.c
Syzlang reproducer: https://github.com/pghk13/Kernel-Bug/blob/main/0702_6.14/KASAN%3A%20out-of-bounds%20in%20__asan_memcpy/11_repro.txt
The error appears to occur around line 105, likely during the second kasan_check_range() call in __asan_memcpy(), which validates the destination address dest. Possible causes: dest + len exceeds the allocated memory boundary, dest points to already-freed memory (use-after-free), or len is too large, so the destination range extends past the valid area.
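For reference, the KASAN memcpy interceptor in mm/kasan/shadow.c has roughly the following shape (a paraphrased sketch, not a verbatim copy of the tree at the HEAD commit above):

    void *__asan_memcpy(void *dest, const void *src, ssize_t len)
    {
            /* first check: the source range, accessed as a read */
            if (!kasan_check_range(src, len, false, _RET_IP_) ||
            /* second check: the destination range, accessed as a write */
                !kasan_check_range(dest, len, true, _RET_IP_))
                    return NULL;

            return __memcpy(dest, src, len);
    }

If the report indeed comes from the second check, the out-of-bounds is on the destination buffer that diWrite() passes to memcpy(), not on the source.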
We have since reproduced this issue several times on 6.14.
If you fix this issue, please add the following tag to the commit:
Reported-by: Kun Hu <huk23@...udan.edu.cn>
Reported-by: Jiaji Qin <jjtan24@...udan.edu.cn>
Reported-by: Shuoran Bai <baishuoran@...eu.edu.cn>
==================================================================
[ 347.632078][T15036] Kernel panic - not syncing: KASAN: panic_on_warn set ...
[ 347.634330][T15036] CPU: 1 UID: 0 PID: 15036 Comm: syz.1.17 Not tainted 6.14.0 #1
[ 347.634672][T15036] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 347.634672][T15036] Call Trace:
[ 347.634672][T15036] <TASK>
[ 347.634672][T15036] dump_stack_lvl+0x3d/0x1b0
[ 347.634672][T15036] panic+0x70b/0x7c0
[ 347.634672][T15036] ? __pfx_panic+0x10/0x10
[ 347.634672][T15036] ? irqentry_exit+0x3b/0x90
[ 347.634672][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.634672][T15036] ? preempt_schedule_thunk+0x1a/0x30
[ 347.634672][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.634672][T15036] ? preempt_schedule_common+0x49/0xc0
[ 347.634672][T15036] ? check_panic_on_warn+0x1f/0xc0
[ 347.634672][T15036] ? diWrite+0xec1/0x1970
[ 347.634672][T15036] check_panic_on_warn+0xb1/0xc0
[ 347.634672][T15036] end_report+0x117/0x180
[ 347.634672][T15036] kasan_report+0xa1/0xc0
[ 347.634672][T15036] ? diWrite+0xec1/0x1970
[ 347.634672][T15036] kasan_check_range+0xed/0x1a0
[ 347.634672][T15036] __asan_memcpy+0x3d/0x60
[ 347.634672][T15036] diWrite+0xec1/0x1970
[ 347.634672][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.634672][T15036] txCommit+0x6bb/0x46f0
[ 347.634672][T15036] ? __sanitizer_cov_trace_pc+0x20/0x50
[ 347.634672][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? __pfx_add_index+0x10/0x10
[ 347.673063][T15036] ? __pfx_txCommit+0x10/0x10
[ 347.673063][T15036] ? lmWriteRecord+0x1102/0x11f0
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? write_comp_data+0x29/0x80
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? __mark_inode_dirty+0x2a4/0xe70
[ 347.673063][T15036] ? __sanitizer_cov_trace_pc+0x20/0x50
[ 347.673063][T15036] jfs_readdir+0x2959/0x42d0
[ 347.673063][T15036] ? __pfx_jfs_readdir+0x10/0x10
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? __pfx_jfs_readdir+0x10/0x10
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? down_write+0x14e/0x200
[ 347.673063][T15036] ? __pfx_down_write+0x10/0x10
[ 347.673063][T15036] ? write_comp_data+0x29/0x80
[ 347.673063][T15036] ? __pfx_down_read_killable+0x10/0x10
[ 347.673063][T15036] ? __pfx_jfs_readdir+0x10/0x10
[ 347.673063][T15036] wrap_directory_iterator+0xa1/0xe0
[ 347.673063][T15036] iterate_dir+0x2a7/0xaf0
[ 347.673063][T15036] ? __sanitizer_cov_trace_pc+0x20/0x50
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] __x64_sys_getdents64+0x154/0x2e0
[ 347.673063][T15036] ? __x64_sys_futex+0x1d3/0x4d0
[ 347.673063][T15036] ? __pfx___x64_sys_getdents64+0x10/0x10
[ 347.673063][T15036] ? srso_alias_return_thunk+0x5/0xfbef5
[ 347.673063][T15036] ? __sanitizer_cov_trace_pc+0x20/0x50
[ 347.673063][T15036] ? __pfx_filldir64+0x10/0x10
[ 347.673063][T15036] ? do_syscall_64+0x95/0x250
[ 347.673063][T15036] do_syscall_64+0xcf/0x250
[ 347.673063][T15036] entry_SYSCALL_64_after_hwframe+0x77/0x7f
[ 347.673063][T15036] RIP: 0033:0x7f4d361acadd
[ 347.673063][T15036] Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
[ 347.673063][T15036] RSP: 002b:00007f4d36f01ba8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d9
[ 347.673063][T15036] RAX: ffffffffffffffda RBX: 00007f4d363a5fa0 RCX: 00007f4d361acadd
[ 347.673063][T15036] RDX: 000000000000005d RSI: 00000000200002c0 RDI: 0000000000000005
[ 347.673063][T15036] RBP: 00007f4d3622ab8f R08: 0000000000000000 R09: 0000000000000000
[ 347.673063][T15036] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[ 347.673063][T15036] R13: 00007f4d363a5fac R14: 00007f4d363a6038 R15: 00007f4d36f01d40
------------------------------
thanks,
Kun Hu