Message-ID: <0000000000007f48880575b3478d@google.com>
Date: Wed, 12 Sep 2018 14:29:02 -0700
From: syzbot <syzbot+fef0b74a3bf760a9ee85@...kaller.appspotmail.com>
To: arnd@...db.de, dbueso@...e.de, dhowells@...hat.com,
ebiederm@...ssion.com, jhaws@....usu.edu,
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com,
viro@...iv.linux.org.uk
Subject: Re: KASAN: use-after-free Read in mqueue_get_tree

syzbot has found a reproducer for the following crash on:

HEAD commit: 7c1b097f27bf Add linux-next specific files for 20180912
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=117a33be400000
kernel config: https://syzkaller.appspot.com/x/.config?x=5980033172920ec0
dashboard link: https://syzkaller.appspot.com/bug?extid=fef0b74a3bf760a9ee85
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15904f71400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=14b37211400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+fef0b74a3bf760a9ee85@...kaller.appspotmail.com

sshd (5546) used greatest stack depth: 16872 bytes left
random: sshd: uninitialized urandom read (32 bytes read)
random: sshd: uninitialized urandom read (32 bytes read)
random: sshd: uninitialized urandom read (32 bytes read)
==================================================================
BUG: KASAN: use-after-free in mqueue_get_tree+0x2ac/0x2e0 ipc/mqueue.c:362
Read of size 8 at addr ffff8801d88ce9c8 by task syz-executor123/5567
CPU: 1 PID: 5567 Comm: syz-executor123 Not tainted 4.19.0-rc3-next-20180912+ #72
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1d3/0x2c4 lib/dump_stack.c:113
print_address_description.cold.8+0x9/0x1ff mm/kasan/report.c:256
kasan_report_error mm/kasan/report.c:354 [inline]
kasan_report.cold.9+0x242/0x309 mm/kasan/report.c:412
__asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:433
mqueue_get_tree+0x2ac/0x2e0 ipc/mqueue.c:362
vfs_get_tree+0x1cb/0x5c0 fs/super.c:1787
mq_create_mount+0xe3/0x190 ipc/mqueue.c:415
mq_init_ns+0x15a/0x210 ipc/mqueue.c:1621
create_ipc_ns ipc/namespace.c:58 [inline]
copy_ipcs+0x3d2/0x580 ipc/namespace.c:84
create_new_namespaces+0x376/0x900 kernel/nsproxy.c:87
unshare_nsproxy_namespaces+0xc3/0x1f0 kernel/nsproxy.c:206
ksys_unshare+0x79c/0x10b0 kernel/fork.c:2535
__do_sys_unshare kernel/fork.c:2603 [inline]
__se_sys_unshare kernel/fork.c:2601 [inline]
__x64_sys_unshare+0x31/0x40 kernel/fork.c:2601
do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x44e547
Code: 00 00 00 b8 63 00 00 00 0f 05 48 3d 01 f0 ff ff 0f 83 3d a0 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 b8 10 01 00 00 0f 05 <48> 3d 01 f0 ff ff 0f 83 1d a0 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007ffc80298538 EFLAGS: 00000217 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 00007ffc80298bc0 RCX: 000000000044e547
RDX: 0000000000000000 RSI: 00007ffc80298540 RDI: 0000000008000000
RBP: 585858582e72656c R08: 0000000000000000 R09: 0000000000000018
R10: 0000000000000000 R11: 0000000000000217 R12: 6c616b7a79732f2e
R13: 0000000000408a80 R14: 0000000000000000 R15: 0000000000000000
The buggy address belongs to the page:
page:ffffea0007623380 count:0 mapcount:0 mapping:0000000000000000 index:0xffff8801d88ced00
flags: 0x2fffc0000000000()
raw: 02fffc0000000000 dead000000000100 dead000000000200 0000000000000000
raw: ffff8801d88ced00 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffff8801d88ce880: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ffff8801d88ce900: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> ffff8801d88ce980: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
^
ffff8801d88cea00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ffff8801d88cea80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================
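
For reference, the register dump above decodes to unshare(CLONE_NEWIPC):
ORIG_RAX 0x110 is __NR_unshare on x86_64 and RDI 0x08000000 is CLONE_NEWIPC,
which matches the copy_ipcs/create_ipc_ns path in the call trace. A minimal
sketch of that trigger path (an illustration only, not the syzkaller C
reproducer linked above) could look like:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	/* Creating a new IPC namespace runs create_ipc_ns() -> mq_init_ns()
	 * -> mq_create_mount() -> vfs_get_tree() -> mqueue_get_tree(), the
	 * frame KASAN flags above.  Needs CAP_SYS_ADMIN (or a prior
	 * unshare(CLONE_NEWUSER) in an unprivileged setup). */
	if (unshare(CLONE_NEWIPC) != 0)
		perror("unshare(CLONE_NEWIPC)");
	return 0;
}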