Message-ID: <CAM_iQpVUULd=0ze4GLq6fFre5rZw5EF4oXkRYMfAdNrZOkg9AA@mail.gmail.com>
Date: Fri, 10 Nov 2017 10:49:03 -0800
From: Cong Wang <xiyou.wangcong@...il.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: syzbot
<bot+bdfa5a20d5d091fffa3d4e8d37ec24962970ebd0@...kaller.appspotmail.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs@...glegroups.com, Tejun Heo <tj@...nel.org>,
David Miller <davem@...emloft.net>, tom@...ntonium.net,
Eric Biggers <ebiggers@...gle.com>,
Ingo Molnar <mingo@...nel.org>,
Tobias Klauser <tklauser@...tanz.ch>,
netdev <netdev@...r.kernel.org>
Subject: Re: KASAN: use-after-free Read in worker_thread (2)
On Wed, Nov 8, 2017 at 5:00 AM, Dmitry Vyukov <dvyukov@...gle.com> wrote:
> On Wed, Nov 8, 2017 at 1:58 PM, syzbot
> <bot+bdfa5a20d5d091fffa3d4e8d37ec24962970ebd0@...kaller.appspotmail.com>
> wrote:
>> Hello,
>>
>> syzkaller hit the following crash on
>> 7dfaa7bc99498da1c6c4a48bee8d2d5265161a8c
>> git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/master
>> compiler: gcc (GCC) 7.1.1 20170620
>> .config is attached
>> Raw console output is attached.
>>
>> Unfortunately, I don't have any reproducer for this bug yet.
>>
>
>
> I guess this is more about kcmsock.c rather than workqueue.c. +kcm maintainers.
Looks like the work is not cancelled before the psock is freed on this path.
Do you have a C reproducer for me to try?
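Roughly the pattern I have in mind (a generic, untested sketch, not the
actual kcmsock.c code; struct foo and foo_destroy() are made-up names
just for illustration): if a work_struct embedded in a heap object is
still queued or running when the object is freed, worker_thread() later
dereferences freed memory, which matches this KASAN report.

/* Generic illustration of the race, not kcmsock.c itself. */
struct foo {
	struct work_struct work;	/* queued on a workqueue */
	/* ... other per-object state ... */
};

static void foo_destroy(struct foo *f)
{
	/*
	 * BUGGY: freeing f while f->work is still pending leaves a
	 * dangling work item on the pool's worklist, and worker_thread()
	 * reads freed memory when it dequeues it.
	 */

	/* Fix pattern: stop the work first, then free. */
	cancel_work_sync(&f->work);	/* also waits for a running instance */
	kfree(f);
}

For kcm the analogous thing would be making sure whatever work still
references the psock is cancelled (or can no longer run) before
unreserve_psock() hands it back to kcm_psock_cache.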
>
>
>> ==================================================================
>> BUG: KASAN: use-after-free in worker_thread+0x15bb/0x1990
>> kernel/workqueue.c:2245
>> Read of size 8 at addr ffff8801c3a74110 by task kworker/u4:6/3515
>>
>> CPU: 1 PID: 3515 Comm: kworker/u4:6 Not tainted 4.14.0-rc7+ #112
>> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
>> Google 01/01/2011
>> Call Trace:
>> __dump_stack lib/dump_stack.c:17 [inline]
>> dump_stack+0x194/0x257 lib/dump_stack.c:53
>> print_address_description+0x73/0x250 mm/kasan/report.c:252
>> kasan_report_error mm/kasan/report.c:351 [inline]
>> kasan_report+0x25b/0x340 mm/kasan/report.c:409
>> __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:430
>> worker_thread+0x15bb/0x1990 kernel/workqueue.c:2245
>> kthread+0x35e/0x430 kernel/kthread.c:231
>> ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:432
>>
>> Allocated by task 31482:
>> save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
>> save_stack+0x43/0xd0 mm/kasan/kasan.c:447
>> set_track mm/kasan/kasan.c:459 [inline]
>> kasan_kmalloc+0xad/0xe0 mm/kasan/kasan.c:551
>> kasan_slab_alloc+0x12/0x20 mm/kasan/kasan.c:489
>> kmem_cache_alloc+0x12e/0x760 mm/slab.c:3562
>> kmem_cache_zalloc include/linux/slab.h:657 [inline]
>> kcm_attach net/kcm/kcmsock.c:1394 [inline]
>> kcm_attach_ioctl net/kcm/kcmsock.c:1460 [inline]
>> kcm_ioctl+0x2d1/0x1610 net/kcm/kcmsock.c:1695
>> sock_do_ioctl+0x65/0xb0 net/socket.c:961
>> sock_ioctl+0x2c2/0x440 net/socket.c:1058
>> vfs_ioctl fs/ioctl.c:46 [inline]
>> do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:686
>> SYSC_ioctl fs/ioctl.c:701 [inline]
>> SyS_ioctl+0x8f/0xc0 fs/ioctl.c:692
>> entry_SYSCALL_64_fastpath+0x1f/0xbe
>>
>> Freed by task 1249:
>> save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
>> save_stack+0x43/0xd0 mm/kasan/kasan.c:447
>> set_track mm/kasan/kasan.c:459 [inline]
>> kasan_slab_free+0x71/0xc0 mm/kasan/kasan.c:524
>> __cache_free mm/slab.c:3504 [inline]
>> kmem_cache_free+0x77/0x280 mm/slab.c:3764
>> unreserve_psock+0x5a1/0x780 net/kcm/kcmsock.c:547
>> kcm_write_msgs+0xbae/0x1b80 net/kcm/kcmsock.c:590
>> kcm_tx_work+0x2e/0x190 net/kcm/kcmsock.c:731
>> process_one_work+0xbf0/0x1bc0 kernel/workqueue.c:2113
>> worker_thread+0x223/0x1990 kernel/workqueue.c:2247
>> kthread+0x35e/0x430 kernel/kthread.c:231
>> ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:432
>>
>> The buggy address belongs to the object at ffff8801c3a74040
>> which belongs to the cache kcm_psock_cache of size 552
>> The buggy address is located 208 bytes inside of
>> 552-byte region [ffff8801c3a74040, ffff8801c3a74268)
>> The buggy address belongs to the page:
>> page:ffffea00070e9d00 count:1 mapcount:0 mapping:ffff8801c3a74040 index:0x0
>> compound_mapcount: 0
>> flags: 0x2fffc0000008100(slab|head)
>> raw: 02fffc0000008100 ffff8801c3a74040 0000000000000000 000000010000000b
>> raw: ffffea00067920a0 ffff8801d3f39948 ffff8801d3f2a840 0000000000000000
>> page dumped because: kasan: bad access detected
>>
>> Memory state around the buggy address:
>> ffff8801c3a74000: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
>> ffff8801c3a74080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>>> ffff8801c3a74100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>>                          ^
>> ffff8801c3a74180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>> ffff8801c3a74200: fb fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc
>> ==================================================================
>>
>>
>> ---
>> This bug is generated by a dumb bot. It may contain errors.
>> See https://goo.gl/tpsmEJ for details.
>> Direct all questions to syzkaller@...glegroups.com.
>> Please credit me with: Reported-by: syzbot <syzkaller@...glegroups.com>
>>
>> syzbot will keep track of this bug report.
>> Once a fix for this bug is committed, please reply to this email with:
>> #syz fix: exact-commit-title
>> To mark this as a duplicate of another syzbot report, please reply with:
>> #syz dup: exact-subject-of-another-report
>> If it's a one-off invalid bug report, please reply with:
>> #syz invalid
>> Note: if the crash happens again, it will cause creation of a new bug
>> report.
>> Note: all commands must start from beginning of the line.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "syzkaller-bugs" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to syzkaller-bugs+unsubscribe@...glegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/syzkaller-bugs/001a114a7bc08e95e7055d783ea5%40google.com.
>> For more options, visit https://groups.google.com/d/optout.