Message-ID: <CAPAsAGwYXdEsNfMqr3pbU3qHVOp7wBwuykCUo_zaozXKKz7sGQ@mail.gmail.com>
Date: Thu, 21 Apr 2016 12:10:28 +0300
From: Andrey Ryabinin <ryabinin.a.a@...il.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Tejun Heo <tj@...nel.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller <syzkaller@...glegroups.com>,
Kostya Serebryany <kcc@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Sasha Levin <sasha.levin@...cle.com>
Subject: Re: fs: GPF in locked_inode_to_wb_and_lock_list
2016-04-21 11:25 GMT+03:00 Dmitry Vyukov <dvyukov@...gle.com>:
> On Wed, Apr 20, 2016 at 11:14 PM, Tejun Heo <tj@...nel.org> wrote:
>> Hello, Dmitry.
>>
>> On Mon, Apr 18, 2016 at 11:44:11AM +0200, Dmitry Vyukov wrote:
>>
>>> general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN
>> ...
>>> RIP: 0010:[<ffffffff818884d2>] [<ffffffff818884d2>]
>>> locked_inode_to_wb_and_lock_list+0xa2/0x750
>>> RSP: 0018:ffff88006cdaf7d0 EFLAGS: 00010246
>>> RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff88006ccf2050
>>> RDX: 0000000000000000 RSI: 000000114c8a8484 RDI: 0000000000000286
>>> RBP: ffff88006cdaf820 R08: ffff88006ccf1840 R09: 0000000000000000
>>> R10: 000229915090805f R11: 0000000000000001 R12: ffff88006a72f5e0
>>> R13: dffffc0000000000 R14: ffffed000d4e5eed R15: ffffffff8830cf40
>>> FS: 0000000000000000(0000) GS:ffff88006d500000(0000) knlGS:0000000000000000
>>> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> CR2: 0000000003301bf8 CR3: 000000006368f000 CR4: 00000000000006e0
>>> DR0: 0000000000001ec9 DR1: 0000000000000000 DR2: 0000000000000000
>>> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
>>> Stack:
>>> ffff88006a72f680 ffff88006a72f768 ffff8800671230d8 03ff88006cdaf948
>>> ffff88006a72f668 ffff88006a72f5e0 ffff8800671230d8 ffff88006cdaf948
>>> ffff880065b90cc8 ffff880067123100 ffff88006cdaf970 ffffffff8188e12e
>>> Call Trace:
>>> [< inline >] inode_to_wb_and_lock_list fs/fs-writeback.c:309
>>> [<ffffffff8188e12e>] writeback_sb_inodes+0x4de/0x1250 fs/fs-writeback.c:1554
>>> [<ffffffff8188efa4>] __writeback_inodes_wb+0x104/0x1e0 fs/fs-writeback.c:1600
>>> [<ffffffff8188f9ae>] wb_writeback+0x7ce/0xc90 fs/fs-writeback.c:1709
>>> [< inline >] wb_do_writeback fs/fs-writeback.c:1844
>>> [<ffffffff81891079>] wb_workfn+0x2f9/0x1000 fs/fs-writeback.c:1884
>>> [<ffffffff813bcd1e>] process_one_work+0x78e/0x15c0 kernel/workqueue.c:2094
>>> [<ffffffff813bdc2b>] worker_thread+0xdb/0xfc0 kernel/workqueue.c:2228
>>> [<ffffffff813cdeef>] kthread+0x23f/0x2d0 drivers/block/aoe/aoecmd.c:1303
>>> [<ffffffff867bc5d2>] ret_from_fork+0x22/0x50 arch/x86/entry/entry_64.S:392
>>> Code: 05 94 4a a8 06 85 c0 0f 85 03 03 00 00 e8 07 15 d0 ff 41 80 3e
>>> 00 0f 85 64 06 00 00 49 8b 9c 24 88 01 00 00 48 89 d8 48 c1 e8 03 <42>
>>> 80 3c 28 00 0f 85 17 06 00 00 48 8b 03 48 83 c0 50 48 39 c3
>>> RIP [< inline >] wb_get include/linux/backing-dev-defs.h:212
>>> RIP [<ffffffff818884d2>] locked_inode_to_wb_and_lock_list+0xa2/0x750
>>> fs/fs-writeback.c:281
>>> RSP <ffff88006cdaf7d0>
>>
>> Man, that's a beautiful trace w/ decoding of inline functions. When
>> did we start doing that? Is there a specific config option for this?
>
> Thanks! :)
> I use this script for symbolization:
> https://github.com/google/sanitizers/blob/master/address-sanitizer/tools/kasan_symbolize.py
BTW, it would be nice to have that script in the kernel tree, instead of
scripts/decode_stacktrace.sh, which is far less advanced.
> It invokes addr2line to provide file:line info, adds inline frames,
> strips ? frames (are they ever useful?) and strips timestamps.
> You just need to pipe a report through it. It assumes that vmlinux is
> in the current dir, but you can override it with --linux flag.
>
> It can also insert actual source lines into a report (amount of
> context is controlled with flags), for example:
>
>
> Call Trace:
> [< inline >] khugepaged_scan_pmd include/linux/uaccess.h:9
> 8 {
> 9 current->pagefault_disabled++;
> 10 }
> [< inline >] khugepaged_scan_mm_slot mm/huge_memory.c:2716
> 2715 hend);
> 2716 ret = khugepaged_scan_pmd(mm, vma,
> 2717                   khugepaged_scan.address,
> [< inline >] khugepaged_do_scan mm/huge_memory.c:2796
> 2795 pass_through_head < 2)
> 2796          progress += khugepaged_scan_mm_slot(pages - progress,
> 2797 &hpage);
> [<ffffffff81790b33>] khugepaged+0x993/0x48e0 mm/huge_memory.c:2831
> 2830 while (!kthread_should_stop()) {
> 2831 khugepaged_do_scan();
> 2832 khugepaged_wait_work();
> [<ffffffff813c195f>] kthread+0x23f/0x2d0 drivers/block/aoe/aoecmd.c:1303
> 1302 add_wait_queue(k->waitq, &wait);
> 1303 __set_current_state(TASK_INTERRUPTIBLE);
> 1304 }
> [<ffffffff866d1b2f>] ret_from_fork+0x3f/0x70 arch/x86/entry/entry_64.S:405
> 404 SWAPGS
> 405 jmp restore_regs_and_iret
> 406 END(ret_from_fork)
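The source-line insertion is also straightforward once addr2line has
produced file:line locations. A rough sketch, again not the actual
implementation, assuming it runs from the kernel source root:

    import linecache

    def print_context(path, lineno, context=1):
        # print `context` lines before and after the target line;
        # the real script controls the amount of context with flags
        for n in range(lineno - context, lineno + context + 1):
            text = linecache.getline(path, n)
            if text:
                print('%6d %s' % (n, text.rstrip()))

For example, print_context('mm/huge_memory.c', 2831) would reproduce
the 2830-2832 snippet shown above (for that kernel tree).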