Message-ID: <CACT4Y+YBY2yGJBHgqGhAcOguag9JUkGkeOz1acgKd=KR3q+noQ@mail.gmail.com>
Date: Thu, 7 Jun 2018 20:23:13 +0200
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Kirill Tkhai <ktkhai@...tuozzo.com>
Cc: syzbot <syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com>,
Christian Brauner <christian.brauner@...ntu.com>,
David Miller <davem@...emloft.net>,
David Ahern <dsahern@...il.com>,
Florian Westphal <fw@...len.de>, Jiri Benc <jbenc@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Xin Long <lucien.xin@...il.com>,
mschiffer@...verse-factory.net, netdev <netdev@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Vladislav Yasevich <vyasevich@...il.com>
Subject: Re: INFO: task hung in ip6gre_exit_batch_net
On Tue, Jun 5, 2018 at 3:55 PM, Kirill Tkhai <ktkhai@...tuozzo.com> wrote:
> On 05.06.2018 12:36, Dmitry Vyukov wrote:
>> On Tue, Jun 5, 2018 at 11:03 AM, Kirill Tkhai <ktkhai@...tuozzo.com> wrote:
>>> Hi, Dmitry!
>>>
>>> On 04.06.2018 18:22, Dmitry Vyukov wrote:
>>>> On Mon, Jun 4, 2018 at 5:03 PM, syzbot
>>>> <syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com> wrote:
>>>>> Hello,
>>>>>
>>>>> syzbot found the following crash on:
>>>>>
>>>>> HEAD commit: bc2dbc5420e8 Merge branch 'akpm' (patches from Andrew)
>>>>> git tree: upstream
>>>>> console output: https://syzkaller.appspot.com/x/log.txt?x=164e42b7800000
>>>>> kernel config: https://syzkaller.appspot.com/x/.config?x=982e2df1b9e60b02
>>>>> dashboard link: https://syzkaller.appspot.com/bug?extid=bf78a74f82c1cf19069e
>>>>> compiler: gcc (GCC) 8.0.1 20180413 (experimental)
>>>>>
>>>>> Unfortunately, I don't have any reproducer for this crash yet.
>>>>>
>>>>> IMPORTANT: if you fix the bug, please add the following tag to the commit:
>>>>> Reported-by: syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com
>>>>
>>>> Another hang on rtnl lock:
>>>>
>>>> #syz dup: INFO: task hung in netdev_run_todo
>>>>
>>>> May be related to "unregister_netdevice: waiting for DEV to become free":
>>>> https://syzkaller.appspot.com/bug?id=1a97a5bd119fd97995f752819fd87840ab9479a9
>>
>> netdev_wait_allrefs does not hold the rtnl lock while it waits, so it
>> must be something different.
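
For reference, the wait loop looks roughly like this (paraphrased from
a 4.17-era net/core/dev.c and heavily trimmed, so don't take it as
verbatim):

/*
 * Sketch of netdev_wait_allrefs(): rtnl_lock is only taken briefly
 * to rebroadcast NETDEV_UNREGISTER; it is not held across the
 * refcount wait itself.
 */
static void netdev_wait_allrefs(struct net_device *dev)
{
	unsigned long rebroadcast_time = jiffies;

	while (netdev_refcnt_read(dev) != 0) {
		if (time_after(jiffies, rebroadcast_time + 1 * HZ)) {
			rtnl_lock();
			/* rebroadcast unregister notification */
			call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
			__rtnl_unlock();
			rebroadcast_time = jiffies;
		}
		msleep(250);
	}
}

So a leaked refcount keeps this loop spinning forever, but other tasks
can still take the rtnl lock in between.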
>>
>>
>>>> Any other explanations for massive hangs on rtnl lock for minutes?
>>>
>>> To exclude the situation when a task exits with rtnl_mutex held:
>>>
>>> would the pr_warn() output from print_held_locks_bug() be included in the
>>> console output if it appears?
>>
>> Yes, everything containing "WARNING:" is detected as a bug.
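
For context, that check lives in kernel/locking/lockdep.c and runs
from do_exit(); roughly, paraphrasing from memory rather than quoting
the source:

/*
 * Sketch of the exit-time lockdep check: if a task exits while
 * still holding any lockdep-tracked lock, a "WARNING:" banner is
 * printed, which syzkaller's console parser picks up.
 */
static void print_held_locks_bug(void)
{
	pr_warn("====================================\n");
	pr_warn("WARNING: %s/%d still has locks held!\n",
		current->comm, task_pid_nr(current));
	lockdep_print_held_locks(current);
}

void debug_check_no_locks_held(void)
{
	if (unlikely(current->lockdep_depth > 0))
		print_held_locks_bug();
}

Since the banner starts with "WARNING:", it would have been reported.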
>
> OK, then a dead task not releasing the lock is excluded.
>
> One more assumption: someone corrupted memory around rtnl_mutex, so it merely looks locked.
> (I read the lockdep "(rtnl_mutex){+.+.}" prints in the initial message as "nobody owns rtnl_mutex").
> A crash dump of the VM may help here.
I can't find any legend for these +'s and .'s, but {+.+.} shows up in
large numbers in practically every task-hung report, for all sorts of
mutexes, so I would not expect it to indicate corruption.
There are dozens of known corruptions that syzkaller can trigger, but
they are usually caught reliably by KASAN. If any of them led to
silent memory corruption, we would see dozens of assorted crashes all
over the kernel. We have seen that at some points in the past, but not
recently. So I would assume that memory is not corrupted in any of
these cases:
https://syzkaller.appspot.com/bug?id=2503c576cabb08d41812e732b390141f01a59545
I wonder if it can just be that slow, rather than actually hung... net
namespace destruction is super slow, so perhaps under heavy load it
all stalls for minutes...
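
Note that the hung-task detector would still fire in that case:
khungtaskd only checks whether a TASK_UNINTERRUPTIBLE task got
scheduled within the timeout window, not whether it would eventually
make progress. Roughly, paraphrasing check_hung_task() from
kernel/hung_task.c:

/*
 * Sketch: a task in TASK_UNINTERRUPTIBLE (e.g. blocked in
 * mutex_lock(&rtnl_mutex)) that has not run for the timeout
 * (120s by default) gets reported, even if it is merely waiting
 * on a very slow but live lock holder.
 */
static void check_hung_task(struct task_struct *t, unsigned long timeout)
{
	unsigned long switch_count = t->nvcsw + t->nivcsw;

	if (switch_count != t->last_switch_count) {
		/* the task has run since the last scan: not hung */
		t->last_switch_count = switch_count;
		return;
	}
	pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n",
	       t->comm, t->pid, timeout);
	sched_show_task(t);
}

So "stalled for minutes under load" and "actually hung" are
indistinguishable from the detector's point of view.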
> Also, there may be a bug in the locking code itself, but that seems the least probable to me.
>
> Kirill