Message-ID: <CANp29Y700diEaeHd6bHksAL_60D+vJD-95EqcveqMME0smNJnw@mail.gmail.com>
Date: Thu, 10 Jul 2025 10:02:59 +0200
From: Aleksandr Nogikh <nogikh@...gle.com>
To: Dave Chinner <david@...morbit.com>
Cc: syzbot <syzbot+3470c9ffee63e4abafeb@...kaller.appspotmail.com>, cem@...nel.org,
linux-kernel@...r.kernel.org, linux-xfs@...r.kernel.org,
syzkaller-bugs@...glegroups.com, kasan-dev <kasan-dev@...glegroups.com>
Subject: Re: [syzbot] [xfs?] possible deadlock in xfs_ilock_attr_map_shared (2)

Hi Dave,

On Thu, Jul 10, 2025 at 12:13 AM 'Dave Chinner' via syzkaller-bugs
<syzkaller-bugs@...glegroups.com> wrote:
>
> On Wed, Jul 09, 2025 at 10:39:29AM -0700, syzbot wrote:
> > Hello,
> >
> > syzbot found the following issue on:
> >
> > HEAD commit: 733923397fd9 Merge tag 'pwm/for-6.16-rc6-fixes' of git://g..
> > git tree: upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=13f53582580000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=b309c907eaab29da
> > dashboard link: https://syzkaller.appspot.com/bug?extid=3470c9ffee63e4abafeb
> > compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
> >
> > Unfortunately, I don't have any reproducer for this issue yet.
> >
> > Downloadable assets:
> > disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-73392339.raw.xz
> > vmlinux: https://storage.googleapis.com/syzbot-assets/be7feaa77b8c/vmlinux-73392339.xz
> > kernel image: https://storage.googleapis.com/syzbot-assets/a663b3e31463/bzImage-73392339.xz
> >
> > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > Reported-by: syzbot+3470c9ffee63e4abafeb@...kaller.appspotmail.com
> >
> > ======================================================
> > WARNING: possible circular locking dependency detected
> > 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0 Not tainted
> > ------------------------------------------------------
> > syz.0.0/5339 is trying to acquire lock:
> > ffffffff8e247500 (fs_reclaim){+.+.}-{0:0}, at: might_alloc include/linux/sched/mm.h:318 [inline]
> > ffffffff8e247500 (fs_reclaim){+.+.}-{0:0}, at: prepare_alloc_pages+0x153/0x610 mm/page_alloc.c:4727
> >
> > but task is already holding lock:
> > ffff888053415098 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_ilock_attr_map_shared+0x92/0xd0 fs/xfs/xfs_inode.c:85
> >
> > which lock already depends on the new lock.
> >
> >
> > the existing dependency chain (in reverse order) is:
> >
> > -> #1 (&xfs_nondir_ilock_class){++++}-{4:4}:
> > lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
> > down_write_nested+0x9d/0x200 kernel/locking/rwsem.c:1693
> > xfs_reclaim_inode fs/xfs/xfs_icache.c:1045 [inline]
> > xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1737 [inline]
> > xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1819
> > xfs_icwalk fs/xfs/xfs_icache.c:1867 [inline]
> > xfs_reclaim_inodes_nr+0x1e3/0x260 fs/xfs/xfs_icache.c:1111
> > super_cache_scan+0x41b/0x4b0 fs/super.c:228
> > do_shrink_slab+0x6ec/0x1110 mm/shrinker.c:437
> > shrink_slab+0xd74/0x10d0 mm/shrinker.c:664
> > shrink_one+0x28a/0x7c0 mm/vmscan.c:4939
> > shrink_many mm/vmscan.c:5000 [inline]
> > lru_gen_shrink_node mm/vmscan.c:5078 [inline]
> > shrink_node+0x314e/0x3760 mm/vmscan.c:6060
> > kswapd_shrink_node mm/vmscan.c:6911 [inline]
> > balance_pgdat mm/vmscan.c:7094 [inline]
> > kswapd+0x147c/0x2830 mm/vmscan.c:7359
> > kthread+0x70e/0x8a0 kernel/kthread.c:464
> > ret_from_fork+0x3f9/0x770 arch/x86/kernel/process.c:148
> > ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
> >
> > -> #0 (fs_reclaim){+.+.}-{0:0}:
> > check_prev_add kernel/locking/lockdep.c:3168 [inline]
> > check_prevs_add kernel/locking/lockdep.c:3287 [inline]
> > validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
> > __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
> > lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
> > __fs_reclaim_acquire mm/page_alloc.c:4045 [inline]
> > fs_reclaim_acquire+0x72/0x100 mm/page_alloc.c:4059
> > might_alloc include/linux/sched/mm.h:318 [inline]
> > prepare_alloc_pages+0x153/0x610 mm/page_alloc.c:4727
> > __alloc_frozen_pages_noprof+0x123/0x370 mm/page_alloc.c:4948
> > alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2419
> > alloc_frozen_pages_noprof mm/mempolicy.c:2490 [inline]
> > alloc_pages_noprof+0xa9/0x190 mm/mempolicy.c:2510
> > get_free_pages_noprof+0xf/0x80 mm/page_alloc.c:5018
> > __kasan_populate_vmalloc mm/kasan/shadow.c:362 [inline]
> > kasan_populate_vmalloc+0x33/0x1a0 mm/kasan/shadow.c:417
> > alloc_vmap_area+0xd51/0x1490 mm/vmalloc.c:2084
> > __get_vm_area_node+0x1f8/0x300 mm/vmalloc.c:3179
> > __vmalloc_node_range_noprof+0x301/0x12f0 mm/vmalloc.c:3845
> > __vmalloc_node_noprof mm/vmalloc.c:3948 [inline]
> > __vmalloc_noprof+0xb1/0xf0 mm/vmalloc.c:3962
> > xfs_buf_alloc_backing_mem fs/xfs/xfs_buf.c:239 [inline]
>
> KASAN is still failing to pass through the __GFP_NOLOCKDEP allocation
> context flag. It's also failing to pass through other important
> context restrictions like GFP_NOFS, GFP_NOIO, __GFP_NOFAIL, etc.
>
> Fundamentally, it's a bug to be doing nested GFP_KERNEL allocations
> inside a more restricted allocation context...
>
> #syz set subsystems: kasan

Thanks for the analysis!
I've added the kasan-dev list to Cc.
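
For the KASAN side, here is a minimal sketch of the direction Dave
describes: deriving the shadow-page allocation flags from the vmalloc
caller's gfp context instead of hardcoding GFP_KERNEL. This is purely
illustrative; kasan_vmalloc_gfp() is a made-up helper name and the
masking policy below is an assumption, not an actual patch.

/*
 * Hypothetical helper (not an existing kernel API): compute the gfp
 * flags KASAN should use when populating vmalloc shadow pages, based
 * on the gfp_mask of the vmalloc caller.
 */
static gfp_t kasan_vmalloc_gfp(gfp_t caller_gfp)
{
	gfp_t gfp = GFP_KERNEL;

	/*
	 * Honour reclaim restrictions: if the caller cleared __GFP_FS
	 * and/or __GFP_IO (GFP_NOFS/GFP_NOIO contexts), clear them
	 * here too, so the nested shadow allocation cannot recurse
	 * into filesystem or block I/O reclaim.
	 */
	gfp &= caller_gfp | ~(__GFP_FS | __GFP_IO);

	/*
	 * Propagate opt-in flags: __GFP_NOLOCKDEP suppresses exactly
	 * the kind of lockdep report quoted above, and __GFP_NOFAIL
	 * callers expect the nested allocation not to fail either.
	 */
	gfp |= caller_gfp & (__GFP_NOLOCKDEP | __GFP_NOFAIL);

	return gfp;
}

Whether such a mask is best threaded down through alloc_vmap_area()
or captured via the memalloc_*_save() scope APIs is a design question
for whoever writes the real fix.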
--
Aleksandr
>
> --
> Dave Chinner
> david@...morbit.com
>