Message-ID: <f13cda37-06a0-4281-87d1-042678a38a6b@lucifer.local>
Date: Tue, 15 Jul 2025 10:33:57 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: syzbot <syzbot+159a3ef1894076a6a6e9@...kaller.appspotmail.com>
Cc: Liam.Howlett@...cle.com, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        shakeel.butt@...ux.dev, surenb@...gle.com,
        syzkaller-bugs@...glegroups.com, vbabka@...e.cz
Subject: Re: [syzbot] [mm?] possible deadlock in lock_next_vma

So (as others have mentioned elsewhere too) this all seems to be a product of
ioctl()s not being synchronised at all: proc_maps_open() is called once at open
time and sets up a single struct proc_maps_private for the file.

Then in procfs_procmap_ioctl():

	struct seq_file *seq = file->private_data;
	struct proc_maps_private *priv = seq->private;

And that'll be the same proc_maps_private for all threads running ioctl()s on
that fd...

So both of these:

struct proc_maps_private {
	...
	bool mmap_locked;
	struct vm_area_struct *locked_vma;
	...
};

fields are problematic here - they implicitly assume only one operation is in
flight per fd at a time... and concurrent ioctl()s make that not the case
(rough userspace sketch below).
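
To make the sharing concrete, roughly this shape of userspace program will do
it (this is not syzbot's reproducer, just an illustration, and it assumes the
PROCMAP_QUERY uAPI from <linux/fs.h>; the exact query parameters don't matter,
only that both threads use the one fd):

#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* PROCMAP_QUERY, struct procmap_query */

static int fd;	/* the one /proc/<pid>/maps fd both threads share */

static void *hammer(void *unused)
{
	struct procmap_query q;

	for (;;) {
		memset(&q, 0, sizeof(q));
		q.size = sizeof(q);
		q.query_addr = 0;
		q.query_flags = PROCMAP_QUERY_COVERING_OR_NEXT_VMA;
		/* Both threads reach do_procmap_query() via the same priv. */
		ioctl(fd, PROCMAP_QUERY, &q);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	fd = open("/proc/self/maps", O_RDONLY);
	if (fd < 0)
		return 1;
	pthread_create(&a, NULL, hammer, NULL);
	pthread_create(&b, NULL, hammer, NULL);
	pthread_join(a, NULL);
	return 0;
}

Both threads land in do_procmap_query() with the same priv, so they stomp on
each other's mmap_locked/locked_vma.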

So you'll get the imbalanced VMA locking you're seeing here, as well as NULL
pointer derefs, in particular because of:

static void unlock_vma(struct proc_maps_private *priv)
{
	if (priv->locked_vma) {
		vma_end_read(priv->locked_vma);
		priv->locked_vma = NULL;
	}
}

Which will just race: one thread sets this field to NULL while another is still
using it, and kaboom (rough interleaving below).
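
My reading of how that goes wrong, with two threads sharing the same priv
(whether you get the NULL deref or a double vma_end_read() just depends on when
the second thread reloads priv->locked_vma):

/*
 *	thread A				thread B
 *	--------				--------
 *	unlock_vma()
 *	  if (priv->locked_vma)  -> true
 *						unlock_vma()
 *						  if (priv->locked_vma)  -> true
 *						  vma_end_read(priv->locked_vma);
 *						  priv->locked_vma = NULL;
 *	  vma_end_read(priv->locked_vma);
 *	    -> reloads NULL, vma_refcount_put()
 *	       derefs vma->vm_mm and boom
 *
 * Or thread A uses the stale non-NULL value and we vma_end_read() the same
 * VMA twice - the lock imbalance above.
 */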

A stack I observed locally (the repro triggers this very reliably) was:

[access NULL vma->vm_mm -> boom]
vma_refcount_put()
unlock_vma()
get_next_vma()
query_vma_find_by_addr()
query_matching_vma()
do_procmap_query()

It seemed to be racing with query_vma_teardown().

So I think we need to either:

a. Acquire a lock before invoking do_procmap_query() (rough sketch below), or
b. Find some other means of storing per-ioctl state.
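
For (a), the shape I have in mind is something like this (just a sketch, not a
tested patch - the query_lock name is made up, it'd need initialising in
proc_maps_open(), and I'm paraphrasing the ioctl handler from memory):

struct proc_maps_private {
	...
	struct mutex query_lock;	/* made-up name: serialises PROCMAP_QUERY */
	...
};

static long procfs_procmap_ioctl(struct file *file, unsigned int cmd,
				 unsigned long arg)
{
	struct seq_file *seq = file->private_data;
	struct proc_maps_private *priv = seq->private;
	long ret;

	switch (cmd) {
	case PROCMAP_QUERY:
		/* Only one query at a time may use priv->locked_vma etc. */
		mutex_lock(&priv->query_lock);
		ret = do_procmap_query(priv, (void __user *)arg);
		mutex_unlock(&priv->query_lock);
		return ret;
	default:
		return -ENOIOCTLCMD;
	}
}

Option (b) would instead mean pulling mmap_locked/locked_vma out of priv into
per-call state passed down the query path.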

As a result, the problem reported here afaict relates only to "fs/proc/task_mmu:
execute PROCMAP_QUERY ioctl under per-vma locks".

Any issues that might/might not relate to the previous commit will have to be
considered separately :P

On Fri, Jul 11, 2025 at 10:57:31PM -0700, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:    26ffb3d6f02c Add linux-next specific files for 20250704
> git tree:       linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=12d4df70580000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=1e4f88512ae53408
> dashboard link: https://syzkaller.appspot.com/bug?extid=159a3ef1894076a6a6e9
> compiler:       Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/fd5569903143/disk-26ffb3d6.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/1b0c9505c543/vmlinux-26ffb3d6.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/9d864c72bed1/bzImage-26ffb3d6.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+159a3ef1894076a6a6e9@...kaller.appspotmail.com
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.16.0-rc4-next-20250704-syzkaller #0 Not tainted
> ------------------------------------------------------
> syz.4.1737/14243 is trying to acquire lock:
> ffff88807634d1e0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
> ffff88807634d1e0 (&mm->mmap_lock){++++}-{4:4}, at: lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
> ffff88807634d1e0 (&mm->mmap_lock){++++}-{4:4}, at: lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
>
> but task is already holding lock:
> ffff888020b36a88 (vm_lock){++++}-{0:0}, at: lock_next_vma+0x146/0xdc0 mm/mmap_lock.c:220
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (vm_lock){++++}-{0:0}:
>        lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
>        __vma_enter_locked+0x182/0x380 mm/mmap_lock.c:63
>        __vma_start_write+0x1e/0x120 mm/mmap_lock.c:87
>        vma_start_write include/linux/mmap_lock.h:267 [inline]
>        mprotect_fixup+0x571/0x9b0 mm/mprotect.c:670
>        setup_arg_pages+0x53a/0xaa0 fs/exec.c:670
>        load_elf_binary+0xb9f/0x2730 fs/binfmt_elf.c:1013
>        search_binary_handler fs/exec.c:1670 [inline]
>        exec_binprm fs/exec.c:1702 [inline]
>        bprm_execve+0x99c/0x1450 fs/exec.c:1754
>        kernel_execve+0x8f0/0x9f0 fs/exec.c:1920
>        try_to_run_init_process+0x13/0x60 init/main.c:1397
>        kernel_init+0xad/0x1d0 init/main.c:1525
>        ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
>        ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
>
> -> #0 (&mm->mmap_lock){++++}-{4:4}:
>        check_prev_add kernel/locking/lockdep.c:3168 [inline]
>        check_prevs_add kernel/locking/lockdep.c:3287 [inline]
>        validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
>        __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
>        lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
>        down_read_killable+0x50/0x350 kernel/locking/rwsem.c:1547
>        mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
>        lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
>        lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
>        get_next_vma fs/proc/task_mmu.c:182 [inline]
>        query_vma_find_by_addr fs/proc/task_mmu.c:516 [inline]
>        query_matching_vma+0x28f/0x4b0 fs/proc/task_mmu.c:545
>        do_procmap_query fs/proc/task_mmu.c:637 [inline]
>        procfs_procmap_ioctl+0x406/0xce0 fs/proc/task_mmu.c:748
>        vfs_ioctl fs/ioctl.c:51 [inline]
>        __do_sys_ioctl fs/ioctl.c:598 [inline]
>        __se_sys_ioctl+0xf9/0x170 fs/ioctl.c:584
>        do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
>        do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
>        entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> other info that might help us debug this:
>
>  Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   rlock(vm_lock);
>                                lock(&mm->mmap_lock);
>                                lock(vm_lock);
>   rlock(&mm->mmap_lock);
>
>  *** DEADLOCK ***
>
> 2 locks held by syz.4.1737/14243:
>  #0: ffff888020b36e48 (vm_lock){++++}-{0:0}, at: lock_next_vma+0x146/0xdc0 mm/mmap_lock.c:220
>  #1: ffff888020b36a88 (vm_lock){++++}-{0:0}, at: lock_next_vma+0x146/0xdc0 mm/mmap_lock.c:220
>
> stack backtrace:
> CPU: 1 UID: 0 PID: 14243 Comm: syz.4.1737 Not tainted 6.16.0-rc4-next-20250704-syzkaller #0 PREEMPT(full)
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
> Call Trace:
>  <TASK>
>  dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
>  print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2046
>  check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2178
>  check_prev_add kernel/locking/lockdep.c:3168 [inline]
>  check_prevs_add kernel/locking/lockdep.c:3287 [inline]
>  validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
>  __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
>  lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
>  down_read_killable+0x50/0x350 kernel/locking/rwsem.c:1547
>  mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
>  lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
>  lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
>  get_next_vma fs/proc/task_mmu.c:182 [inline]
>  query_vma_find_by_addr fs/proc/task_mmu.c:516 [inline]
>  query_matching_vma+0x28f/0x4b0 fs/proc/task_mmu.c:545
>  do_procmap_query fs/proc/task_mmu.c:637 [inline]
>  procfs_procmap_ioctl+0x406/0xce0 fs/proc/task_mmu.c:748
>  vfs_ioctl fs/ioctl.c:51 [inline]
>  __do_sys_ioctl fs/ioctl.c:598 [inline]
>  __se_sys_ioctl+0xf9/0x170 fs/ioctl.c:584
>  do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
>  do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
>  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f79bc78e929
> Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007f79bd5c8038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
> RAX: ffffffffffffffda RBX: 00007f79bc9b6080 RCX: 00007f79bc78e929
> RDX: 0000200000000180 RSI: 00000000c0686611 RDI: 0000000000000006
> RBP: 00007f79bc810b39 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> R13: 0000000000000000 R14: 00007f79bc9b6080 R15: 00007ffcdd82ae18
>  </TASK>
>
>
> ---
> This report is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information about syzbot.
> syzbot engineers can be reached at syzkaller@...glegroups.com.
>
> syzbot will keep track of this issue. See:
> https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
>
> If the report is already addressed, let syzbot know by replying with:
> #syz fix: exact-commit-title
>
> If you want to overwrite report's subsystems, reply with:
> #syz set subsystems: new-subsystem
> (See the list of subsystem names on the web dashboard)
>
> If the report is a duplicate of another one, reply with:
> #syz dup: exact-subject-of-another-report
>
> If you want to undo deduplication, reply with:
> #syz undup
