Message-ID: <e71c0df1-d83c-030c-7c97-13a923aca1b3@virtuozzo.com>
Date: Tue, 5 Jun 2018 16:55:20 +0300
From: Kirill Tkhai <ktkhai@...tuozzo.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: syzbot <syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com>,
Christian Brauner <christian.brauner@...ntu.com>,
David Miller <davem@...emloft.net>,
David Ahern <dsahern@...il.com>,
Florian Westphal <fw@...len.de>, Jiri Benc <jbenc@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Xin Long <lucien.xin@...il.com>,
mschiffer@...verse-factory.net, netdev <netdev@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Vladislav Yasevich <vyasevich@...il.com>
Subject: Re: INFO: task hung in ip6gre_exit_batch_net
On 05.06.2018 12:36, Dmitry Vyukov wrote:
> On Tue, Jun 5, 2018 at 11:03 AM, Kirill Tkhai <ktkhai@...tuozzo.com> wrote:
>> Hi, Dmitry!
>>
>> On 04.06.2018 18:22, Dmitry Vyukov wrote:
>>> On Mon, Jun 4, 2018 at 5:03 PM, syzbot
>>> <syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com> wrote:
>>>> Hello,
>>>>
>>>> syzbot found the following crash on:
>>>>
>>>> HEAD commit: bc2dbc5420e8 Merge branch 'akpm' (patches from Andrew)
>>>> git tree: upstream
>>>> console output: https://syzkaller.appspot.com/x/log.txt?x=164e42b7800000
>>>> kernel config: https://syzkaller.appspot.com/x/.config?x=982e2df1b9e60b02
>>>> dashboard link: https://syzkaller.appspot.com/bug?extid=bf78a74f82c1cf19069e
>>>> compiler: gcc (GCC) 8.0.1 20180413 (experimental)
>>>>
>>>> Unfortunately, I don't have any reproducer for this crash yet.
>>>>
>>>> IMPORTANT: if you fix the bug, please add the following tag to the commit:
>>>> Reported-by: syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com
>>>
>>> Another hang on rtnl lock:
>>>
>>> #syz dup: INFO: task hung in netdev_run_todo
>>>
>>> May be related to "unregister_netdevice: waiting for DEV to become free":
>>> https://syzkaller.appspot.com/bug?id=1a97a5bd119fd97995f752819fd87840ab9479a9
>
> netdev_wait_allrefs does not hold rtnl lock during waiting, so it must
> be something different.
>
>
>>> Any other explanations for massive hangs on rtnl lock for minutes?
>>
>> To exclude the situation when a task exits with rtnl_mutex held:
>>
>> would the pr_warn() messages from print_held_locks_bug() be included in the
>> console output if they appear?
>
> Yes, everything containing "WARNING:" is detected as bug.
OK, then a dead task not releasing the lock is excluded.

One more assumption: someone corrupted the memory around rtnl_mutex, so it looks
locked even though nobody owns it. (I read the lockdep "(rtnl_mutex){+.+.}" prints
in the initial message as "nobody owns rtnl_mutex".)

A crash dump of the VM might help here.

There could also be a bug in the locking code itself, but that seems the least
probable to me.
Kirill