Message-ID: <4d0162b9.1aea8.19436dd812f.Coremail.stitch@zju.edu.cn>
Date: Sun, 5 Jan 2025 22:27:53 +0800 (GMT+08:00)
From: "Jiacheng Xu" <stitch@....edu.cn>
To: maarten.lankhorst@...ux.intel.com, linux-kernel@...r.kernel.org,
mripard@...nel.org, tzimmermann@...e.de, airlied@...il.com,
simona@...ll.ch
Cc: syzkaller@...glegroups.com
Subject: [BUG] KASAN: slab-use-after-free in
drm_atomic_connector_get_property
Hi developers,

We are reporting a Linux kernel issue found with a modified version of syzkaller.

HEAD commit: 4bbf9020 6.13.0-rc4
git tree: upstream
kernel config: https://github.com/google/syzkaller/blob/master/dashboard/config/linux/upstream-apparmor-kasan.config

Unfortunately, we cannot provide a reproducing program; we hope the call traces in the crash log below are enough to locate the problem.

The KASAN report indicates a use-after-free in `drm_atomic_connector_get_property`, triggered by reading a freed object through `state->hdr_output_metadata->base.id`.

Triggering path:
- Memory for the object backing `state` is allocated in drm_atomic_helper_crtc_duplicate_state (see the "Allocated by" stack below).
- The memory is later freed in drm_atomic_state_default_clear, specifically in the call to connector->funcs->atomic_destroy_state (see the "Freed by" stack below).
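For context, the read flagged at drivers/gpu/drm/drm_atomic_uapi.c:806 is the HDR_OUTPUT_METADATA branch of the connector property getter. Below is a short sketch of that branch as we read it in the upstream tree at the HEAD commit above (paraphrased, so exact line numbers and surrounding context may differ):

    /* drivers/gpu/drm/drm_atomic_uapi.c, drm_atomic_connector_get_property() */
    } else if (property == config->hdr_output_metadata_property) {
            /*
             * KASAN flags this dereference: state->hdr_output_metadata
             * points into the 512-byte kmalloc region that has already
             * been freed via drm_atomic_state_default_clear() (see the
             * "Freed by" stack below).
             */
            *val = state->hdr_output_metadata ?
                    state->hdr_output_metadata->base.id : 0;
    }

Here, as far as we can tell, `state` is the `const struct drm_connector_state *` passed to the getter and `config` is `&connector->dev->mode_config`. The full crash log follows.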
==================================================================
BUG: KASAN: slab-use-after-free in drm_atomic_connector_get_property+0x1304/0x1830 drivers/gpu/drm/drm_atomic_uapi.c:806
Read of size 1 at addr ffff8880434e2d2e by task syz-executor.2/71131
CPU: 1 UID: 0 PID: 71131 Comm: syz-executor.2 Not tainted 6.13.0-rc4 #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Sched_ext: serialise (enabled+all), task: runnable_at=+0ms
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x229/0x350 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0x164/0x530 mm/kasan/report.c:489
kasan_report+0x147/0x180 mm/kasan/report.c:602
drm_atomic_connector_get_property+0x1304/0x1830 drivers/gpu/drm/drm_atomic_uapi.c:806
drm_mode_object_get_properties+0x241/0x660 drivers/gpu/drm/drm_mode_object.c:403
drm_mode_getconnector+0x1351/0x17f0 drivers/gpu/drm/drm_connector.c:3272
drm_ioctl_kernel+0x388/0x490 drivers/gpu/drm/drm_ioctl.c:796
drm_ioctl+0x768/0xc50 drivers/gpu/drm/drm_ioctl.c:893
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:906 [inline]
__se_sys_ioctl+0x266/0x350 fs/ioctl.c:892
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf6/0x210 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f64a18903ad
Code: c3 e8 a7 2b 00 00 0f 1f 80 00 00 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f64a250c0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f64a19cbf80 RCX: 00007f64a18903ad
RDX: 0000000020000480 RSI: 00000000c05064a7 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f64a250c640
R13: 000000000000000e R14: 00007f64a184fc90 R15: 00007f64a2504000
</TASK>
Allocated by task 5228:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
__kasan_kmalloc+0x89/0xa0 mm/kasan/common.c:394
kasan_kmalloc include/linux/kasan.h:260 [inline]
__kmalloc_cache_noprof+0x238/0x3d0 mm/slub.c:4329
kmalloc_noprof include/linux/slab.h:901 [inline]
drm_atomic_helper_crtc_duplicate_state+0x8a/0xd0 drivers/gpu/drm/drm_atomic_state_helper.c:177
drm_atomic_get_crtc_state+0x1d1/0x4f0 drivers/gpu/drm/drm_atomic.c:360
drm_atomic_get_plane_state+0x5be/0x680 drivers/gpu/drm/drm_atomic.c:561
drm_atomic_helper_dirtyfb+0x686/0x890 drivers/gpu/drm/drm_damage_helper.c:171
drm_fbdev_shmem_helper_fb_dirty+0x1ad/0x330 drivers/gpu/drm/drm_fbdev_shmem.c:117
drm_fb_helper_fb_dirty drivers/gpu/drm/drm_fb_helper.c:376 [inline]
drm_fb_helper_damage_work+0x2cb/0x850 drivers/gpu/drm/drm_fb_helper.c:399
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa96/0x18f0 kernel/workqueue.c:3310
worker_thread+0x8a9/0xd80 kernel/workqueue.c:3391
kthread+0x2c3/0x360 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Freed by task 5228:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:582
poison_slab_object mm/kasan/common.c:247 [inline]
__kasan_slab_free+0x5a/0x70 mm/kasan/common.c:264
kasan_slab_free include/linux/kasan.h:233 [inline]
slab_free_hook mm/slub.c:2353 [inline]
slab_free mm/slub.c:4613 [inline]
kfree+0x196/0x450 mm/slub.c:4761
drm_atomic_state_default_clear+0x523/0xf80 drivers/gpu/drm/drm_atomic.c:224
drm_atomic_state_clear drivers/gpu/drm/drm_atomic.c:293 [inline]
__drm_atomic_state_free+0xdc/0x290 drivers/gpu/drm/drm_atomic.c:310
kref_put include/linux/kref.h:65 [inline]
drm_atomic_state_put include/drm/drm_atomic.h:538 [inline]
drm_atomic_helper_dirtyfb+0x7ed/0x890 drivers/gpu/drm/drm_damage_helper.c:193
drm_fbdev_shmem_helper_fb_dirty+0x1ad/0x330 drivers/gpu/drm/drm_fbdev_shmem.c:117
drm_fb_helper_fb_dirty drivers/gpu/drm/drm_fb_helper.c:376 [inline]
drm_fb_helper_damage_work+0x2cb/0x850 drivers/gpu/drm/drm_fb_helper.c:399
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa96/0x18f0 kernel/workqueue.c:3310
worker_thread+0x8a9/0xd80 kernel/workqueue.c:3391
kthread+0x2c3/0x360 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
The buggy address belongs to the object at ffff8880434e2c00
which belongs to the cache kmalloc-512 of size 512
The buggy address is located 302 bytes inside of
freed 512-byte region [ffff8880434e2c00, ffff8880434e2e00)
The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x434e0
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
ksm flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801d441c80 ffffea00016a9f00 dead000000000003
raw: 0000000000000000 0000000000100010 00000001f5000000 0000000000000000
head: 04fff00000000040 ffff88801d441c80 ffffea00016a9f00 dead000000000003
head: 0000000000000000 0000000000100010 00000001f5000000 0000000000000000
head: 04fff00000000002 ffffea00010d3801 ffffffffffffffff 0000000000000000
head: ffff888000000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5230, tgid 5230 (systemd-udevd), ts 380588817989, free_ts 378738152560
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1f6/0x240 mm/page_alloc.c:1558
prep_new_page mm/page_alloc.c:1566 [inline]
get_page_from_freelist+0x3586/0x36d0 mm/page_alloc.c:3476
__alloc_pages_noprof+0x260/0x680 mm/page_alloc.c:4753
alloc_pages_mpol_noprof+0x3c8/0x650 mm/mempolicy.c:2269
alloc_slab_page+0x6a/0x110 mm/slub.c:2423
allocate_slab+0x5f/0x2b0 mm/slub.c:2589
new_slab mm/slub.c:2642 [inline]
___slab_alloc+0xbdf/0x1490 mm/slub.c:3830
__slab_alloc mm/slub.c:3920 [inline]
__slab_alloc_node mm/slub.c:3995 [inline]
slab_alloc_node mm/slub.c:4156 [inline]
__kmalloc_cache_noprof+0x29b/0x3d0 mm/slub.c:4324
kmalloc_noprof include/linux/slab.h:901 [inline]
kzalloc_noprof include/linux/slab.h:1037 [inline]
kernfs_fop_open+0x48e/0xfa0 fs/kernfs/file.c:623
do_dentry_open+0xc7c/0x1b00 fs/open.c:945
vfs_open+0x31/0xc0 fs/open.c:1075
do_open fs/namei.c:3828 [inline]
path_openat+0x2b63/0x3870 fs/namei.c:3987
do_filp_open+0xe9/0x1c0 fs/namei.c:4014
do_sys_openat2+0x135/0x1d0 fs/open.c:1402
do_sys_open fs/open.c:1417 [inline]
__do_sys_openat fs/open.c:1433 [inline]
__se_sys_openat fs/open.c:1428 [inline]
__x64_sys_openat+0x15d/0x1c0 fs/open.c:1428
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf6/0x210 arch/x86/entry/common.c:83
page last free pid 9 tgid 9 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1127 [inline]
free_unref_page+0xe32/0x1100 mm/page_alloc.c:2659
vfree+0x1c9/0x360 mm/vmalloc.c:3383
delayed_vfree_work+0x55/0x80 mm/vmalloc.c:3303
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa96/0x18f0 kernel/workqueue.c:3310
worker_thread+0x8a9/0xd80 kernel/workqueue.c:3391
kthread+0x2c3/0x360 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Memory state around the buggy address:
ffff8880434e2c00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8880434e2c80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8880434e2d00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff8880434e2d80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8880434e2e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================