Message-Id: <1562879529.4014.95.camel@linux.ibm.com>
Date: Thu, 11 Jul 2019 17:12:09 -0400
From: Mimi Zohar <zohar@...ux.ibm.com>
To: Eric Biggers <ebiggers3@...il.com>
Cc: syzbot <syzbot+5ab61747675a87ea359d@...kaller.appspotmail.com>,
dmitry.kasatkin@...il.com, jmorris@...ei.org,
linux-integrity@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-security-module@...r.kernel.org, serge@...lyn.com,
syzkaller-bugs@...glegroups.com, zohar@...ux.vnet.ibm.com
Subject: Re: possible deadlock in process_measurement
Hi Eric,
> > the existing dependency chain (in reverse order) is:
> >
> > -> #1 (&mm->mmap_sem#2){++++}:
> > down_read+0x3f/0x1e0 kernel/locking/rwsem.c:24
> > get_user_pages_unlocked+0xfc/0x4a0 mm/gup.c:1174
> > __gup_longterm_unlocked mm/gup.c:2193 [inline]
> > get_user_pages_fast+0x43f/0x530 mm/gup.c:2245
> > iov_iter_get_pages+0x2c2/0xf80 lib/iov_iter.c:1287
> > dio_refill_pages fs/direct-io.c:171 [inline]
> > dio_get_page fs/direct-io.c:215 [inline]
> > do_direct_IO fs/direct-io.c:983 [inline]
> > do_blockdev_direct_IO+0x3f7b/0x8e00 fs/direct-io.c:1336
> > __blockdev_direct_IO+0xa1/0xca fs/direct-io.c:1422
> > ext4_direct_IO_write fs/ext4/inode.c:3782 [inline]
> > ext4_direct_IO+0xaa7/0x1bb0 fs/ext4/inode.c:3909
> > generic_file_direct_write+0x20a/0x4a0 mm/filemap.c:3110
> > __generic_file_write_iter+0x2ee/0x630 mm/filemap.c:3293
> > ext4_file_write_iter+0x332/0x1070 fs/ext4/file.c:266
> > call_write_iter include/linux/fs.h:1870 [inline]
> > new_sync_write+0x4d3/0x770 fs/read_write.c:483
> > __vfs_write+0xe1/0x110 fs/read_write.c:496
> > vfs_write+0x268/0x5d0 fs/read_write.c:558
> > ksys_write+0x14f/0x290 fs/read_write.c:611
> > __do_sys_write fs/read_write.c:623 [inline]
> > __se_sys_write fs/read_write.c:620 [inline]
> > __x64_sys_write+0x73/0xb0 fs/read_write.c:620
> > do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
> > entry_SYSCALL_64_after_hwframe+0x49/0xbe
> >
> > -> #0 (&sb->s_type->i_mutex_key#10){+.+.}:
> > lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4300
> > down_write+0x38/0xa0 kernel/locking/rwsem.c:66
> > inode_lock include/linux/fs.h:778 [inline]
> > process_measurement+0x15ae/0x15e0 security/integrity/ima/ima_main.c:228
> > ima_file_mmap+0x11a/0x130 security/integrity/ima/ima_main.c:370
> > security_file_mprotect+0xd5/0x100 security/security.c:1430
> > do_mprotect_pkey+0x537/0xa30 mm/mprotect.c:550
> > __do_sys_pkey_mprotect mm/mprotect.c:590 [inline]
> > __se_sys_pkey_mprotect mm/mprotect.c:587 [inline]
> > __x64_sys_pkey_mprotect+0x97/0xf0 mm/mprotect.c:587
> > do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
> > entry_SYSCALL_64_after_hwframe+0x49/0xbe
> >
> > other info that might help us debug this:
> >
> > Possible unsafe locking scenario:
> >
> > CPU0 CPU1
> > ---- ----
> > lock(&mm->mmap_sem#2);
> > lock(&sb->s_type->i_mutex_key#10);
> > lock(&mm->mmap_sem#2);
> > lock(&sb->s_type->i_mutex_key#10);
> >
> > *** DEADLOCK ***
The locking on CPU1 shouldn't be nested. Only after the call to
security_file_mmap() would the mmap_sem be taken.
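
For illustration only, here is a rough userspace sketch of the inverted ordering
lockdep is comparing, with pthread rwlocks standing in for i_rwsem (the
i_mutex_key class above) and mm->mmap_sem.  This is not the kernel code, just
the same AB-BA pattern in a self-contained program:

/*
 * Illustrative only: pthread rwlocks stand in for the kernel's i_rwsem
 * and mm->mmap_sem.  Thread A mimics the ext4 direct-IO write path
 * (i_rwsem, then mmap_sem); thread B mimics the pkey_mprotect() path
 * (mmap_sem, then i_rwsem via the IMA hook).
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t inode_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t mmap_sem    = PTHREAD_RWLOCK_INITIALIZER;

static void *direct_io_write(void *arg)
{
        pthread_rwlock_wrlock(&inode_rwsem);    /* inode_lock()            */
        usleep(1000);                           /* widen the race window   */
        pthread_rwlock_rdlock(&mmap_sem);       /* get_user_pages_fast()   */
        pthread_rwlock_unlock(&mmap_sem);
        pthread_rwlock_unlock(&inode_rwsem);
        return NULL;
}

static void *mprotect_path(void *arg)
{
        pthread_rwlock_wrlock(&mmap_sem);       /* do_mprotect_pkey()      */
        usleep(1000);
        pthread_rwlock_wrlock(&inode_rwsem);    /* process_measurement()   */
        pthread_rwlock_unlock(&inode_rwsem);
        pthread_rwlock_unlock(&mmap_sem);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, direct_io_write, NULL);
        pthread_create(&b, NULL, mprotect_path, NULL);
        pthread_join(a, NULL);                  /* can hang: AB-BA deadlock */
        pthread_join(b, NULL);
        puts("no deadlock on this run");
        return 0;
}

Built with "gcc -pthread" and run a few times, this will eventually hang with
each thread holding one lock while waiting for the other, which is the
scenario the report above is warning about.
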
Mimi
> >
> > 1 lock held by syz-executor395/17373:
> > #0: 00000000e0714fc5 (&mm->mmap_sem#2){++++}, at: do_mprotect_pkey+0x1f6/0xa30 mm/mprotect.c:485
> >
> > stack backtrace:
> > CPU: 1 PID: 17373 Comm: syz-executor395 Not tainted 5.2.0-rc2-next-20190531 #4
> > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> > Call Trace:
> > __dump_stack lib/dump_stack.c:77 [inline]
> > dump_stack+0x172/0x1f0 lib/dump_stack.c:113
> > print_circular_bug.cold+0x1cc/0x28f kernel/locking/lockdep.c:1566
> > check_prev_add kernel/locking/lockdep.c:2311 [inline]
> > check_prevs_add kernel/locking/lockdep.c:2419 [inline]
> > validate_chain kernel/locking/lockdep.c:2801 [inline]
> > __lock_acquire+0x3755/0x5490 kernel/locking/lockdep.c:3790
> > lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4300
> > down_write+0x38/0xa0 kernel/locking/rwsem.c:66
> > inode_lock include/linux/fs.h:778 [inline]
> > process_measurement+0x15ae/0x15e0 security/integrity/ima/ima_main.c:228
> > ima_file_mmap+0x11a/0x130 security/integrity/ima/ima_main.c:370
> > security_file_mprotect+0xd5/0x100 security/security.c:1430
> > do_mprotect_pkey+0x537/0xa30 mm/mprotect.c:550
> > __do_sys_pkey_mprotect mm/mprotect.c:590 [inline]
> > __se_sys_pkey_mprotect mm/mprotect.c:587 [inline]
> > __x64_sys_pkey_mprotect+0x97/0xf0 mm/mprotect.c:587
> > do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
> > entry_SYSCALL_64_after_hwframe+0x49/0xbe
> > RIP: 0033:0x440279
> > Code: 18 89 d0 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 48 89 f8 48 89 f7
> > 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
> > ff 0f 83 fb 13 fc ff c3 66 2e 0f 1f 84 00 00 00 00
> > RSP: 002b:00007ffeec2f48d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000149
> > RAX: ffffffffffffffda RBX: 00000000004002c8 RCX: 0000000000440279
> > RDX: 000000000000000
> >
>