Message-ID: <6beeea79-92eb-462c-c5a3-009a38e1ae4f@virtuozzo.com>
Date:   Fri, 8 Jun 2018 11:43:39 +0300
From:   Kirill Tkhai <ktkhai@...tuozzo.com>
To:     Dmitry Vyukov <dvyukov@...gle.com>
Cc:     syzbot <syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com>,
        Christian Brauner <christian.brauner@...ntu.com>,
        David Miller <davem@...emloft.net>,
        David Ahern <dsahern@...il.com>,
        Florian Westphal <fw@...len.de>, Jiri Benc <jbenc@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Xin Long <lucien.xin@...il.com>,
        mschiffer@...verse-factory.net, netdev <netdev@...r.kernel.org>,
        syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
        Vladislav Yasevich <vyasevich@...il.com>
Subject: Re: INFO: task hung in ip6gre_exit_batch_net

On 08.06.2018 11:38, Dmitry Vyukov wrote:
> On Fri, Jun 8, 2018 at 10:31 AM, Kirill Tkhai <ktkhai@...tuozzo.com> wrote:
>>>>>>>>>> Hi, Dmitry!
>>>>>>>>>>
>>>>>>>>>> On 04.06.2018 18:22, Dmitry Vyukov wrote:
>>>>>>>>>>> On Mon, Jun 4, 2018 at 5:03 PM, syzbot
>>>>>>>>>>> <syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com> wrote:
>>>>>>>>>>>> Hello,
>>>>>>>>>>>>
>>>>>>>>>>>> syzbot found the following crash on:
>>>>>>>>>>>>
>>>>>>>>>>>> HEAD commit:    bc2dbc5420e8 Merge branch 'akpm' (patches from Andrew)
>>>>>>>>>>>> git tree:       upstream
>>>>>>>>>>>> console output: https://syzkaller.appspot.com/x/log.txt?x=164e42b7800000
>>>>>>>>>>>> kernel config:  https://syzkaller.appspot.com/x/.config?x=982e2df1b9e60b02
>>>>>>>>>>>> dashboard link: https://syzkaller.appspot.com/bug?extid=bf78a74f82c1cf19069e
>>>>>>>>>>>> compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
>>>>>>>>>>>>
>>>>>>>>>>>> Unfortunately, I don't have any reproducer for this crash yet.
>>>>>>>>>>>>
>>>>>>>>>>>> IMPORTANT: if you fix the bug, please add the following tag to the commit:
>>>>>>>>>>>> Reported-by: syzbot+bf78a74f82c1cf19069e@...kaller.appspotmail.com
>>>>>>>>>>>
>>>>>>>>>>> Another hang on rtnl lock:
>>>>>>>>>>>
>>>>>>>>>>> #syz dup: INFO: task hung in netdev_run_todo
>>>>>>>>>>>
>>>>>>>>>>> May be related to "unregister_netdevice: waiting for DEV to become free":
>>>>>>>>>>> https://syzkaller.appspot.com/bug?id=1a97a5bd119fd97995f752819fd87840ab9479a9
>>>>>>>>>
>>>>>>>>> netdev_wait_allrefs() does not hold the rtnl lock while waiting, so it must
>>>>>>>>> be something different.
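
For reference, the wait loop in question looks roughly like this (a simplified
sketch from memory, not the exact net/core/dev.c code; the rebroadcast and the
warning are rate-limited in the real function). The refcount is polled with
rtnl dropped, and the lock is only re-taken briefly to rebroadcast
NETDEV_UNREGISTER:

static void wait_allrefs_sketch(struct net_device *dev)
{
	int refcnt = netdev_refcnt_read(dev);

	while (refcnt != 0) {
		/* briefly re-take rtnl only to rebroadcast the event */
		rtnl_lock();
		call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
		__rtnl_unlock();

		/* sleep and poll the refcount without rtnl held */
		msleep(250);
		refcnt = netdev_refcnt_read(dev);

		pr_emerg("unregister_netdevice: waiting for %s to become free. Usage count = %d\n",
			 dev->name, refcnt);
	}
}

So a device stuck at this point keeps printing "waiting for DEV to become
free", but it does not pin rtnl_mutex for the whole wait.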
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>> Any other explanations for massive hangs on rtnl lock for minutes?
>>>>>>>>>>
>>>>>>>>>> To exclude the situation where a task exits while holding rtnl_mutex:
>>>>>>>>>>
>>>>>>>>>> would the pr_warn() output from print_held_locks_bug() be included in the console
>>>>>>>>>> log if it appears?
>>>>>>>>>
>>>>>>>>> Yes, everything containing "WARNING:" is detected as a bug.
>>>>>>>>
>>>>>>>> OK, then a dead task not releasing the lock is excluded.
>>>>>>>>
>>>>>>>> One more assumption: someone corrupted memory around rtnl_mutex, and it merely looks locked
>>>>>>>> (I read the lockdep "(rtnl_mutex){+.+.}" prints in the initial message as "nobody owns rtnl_mutex").
>>>>>>>> A crash dump of the VM might help here.
>>>>>>>
>>>>>>> I can't find any legend for these +'s and .'s, but {+.+.} is present
>>>>>>> in large amounts in just about any task-hung report for different mutexes,
>>>>>>> so I would not expect that it means corruption.
>>>>>>>
>>>>>>> There are dozens of known corruptions that syzkaller can trigger. But
>>>>>>> usually they are reliably caught by KASAN. If any of them led
>>>>>>> to silent memory corruption, we would get dozens of assorted crashes
>>>>>>> throughout the kernel. We've seen that at some points, but not
>>>>>>> recently. So I would assume that memory is not corrupted in all these
>>>>>>> cases:
>>>>>>> https://syzkaller.appspot.com/bug?id=2503c576cabb08d41812e732b390141f01a59545
>>>>>>
>>>>>> This BUG clarifies the {+.+.}:
>>>>>>
>>>>>> 4 locks held by kworker/0:145/381:
>>>>>>  #0:  ((wq_completion)"hwsim_wq"){+.+.}, at: [<000000003f9487f0>] work_static include/linux/workqueue.h:198 [inline]
>>>>>>  #0:  ((wq_completion)"hwsim_wq"){+.+.}, at: [<000000003f9487f0>] set_work_data kernel/workqueue.c:619 [inline]
>>>>>>  #0:  ((wq_completion)"hwsim_wq"){+.+.}, at: [<000000003f9487f0>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
>>>>>>  #0:  ((wq_completion)"hwsim_wq"){+.+.}, at: [<000000003f9487f0>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
>>>>>>  #1:  ((work_completion)(&data->destroy_work)){+.+.}, at: [<00000000bbdd2115>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
>>>>>>  #2:  (rtnl_mutex){+.+.}, at: [<000000009c9d14f8>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
>>>>>>  #3:  (rcu_sched_state.exp_mutex){+.+.}, at: [<000000001ba1a807>] exp_funnel_lock kernel/rcu/tree_exp.h:272 [inline]
>>>>>>  #3:  (rcu_sched_state.exp_mutex){+.+.}, at: [<000000001ba1a807>] _synchronize_rcu_expedited.constprop.72+0x9fa/0xac0 kernel/rcu/tree_exp.h:596
>>>>>>
>>>>>> There we have rtnl_mutex locked, and the {+.+.} annotation is the same as above. It's definitely locked,
>>>>>> since there is one more lock taken after it.
>>>>>>
>>>>>> This BUG happens because there are many rtnl_mutex waiters while the owner
>>>>>> is synchronizing RCU. That is rather clear to me, in contrast to the hang in this topic.
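
In other words, the pattern behind that dump is roughly the following (a
simplified sketch with a hypothetical helper name, not the exact
mac80211_hwsim code). The work handler holds rtnl_mutex across an expedited
RCU grace period, so every other rtnl_lock() caller -- including the ip6gre
netns exit from the subject -- has to wait that grace period out as well:

/* Called from the radio's destroy_work handler (sketch only). */
static void destroy_one_radio(struct net_device *dev)
{
	rtnl_lock();
	/* Teardown reaches synchronize_net(), which picks
	 * synchronize_rcu_expedited() because rtnl_mutex is held,
	 * so rtnl stays locked for the whole expedited grace period. */
	unregister_netdevice(dev);
	rtnl_unlock();
}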
>>>>>
>>>>>
>>>>> You mean that it's not hung, but rather needs more than 2 minutes to
>>>>> complete, right?
>>>>
>>>> Yeah, I think this is possible; I've seen situations like that.
>>>> Suppose synchronize_rcu_expedited() executes for X seconds. Then
>>>> it takes just 120/X calls of "this function under rtnl_mutex" to get
>>>> a hung-task report for someone else who wants the mutex too.
>>>>
>>>> Also, even though CFS is a fair scheduler, when the calls are
>>>> made from a workqueue, every work item will sleep. So, every work item
>>>> will be executed in a separate worker task, and every worker will have its
>>>> own time slice. This increases the probability that these tasks will
>>>> take CPU time before the task in the header of the hang report.
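
To make the 120/X estimate concrete: the hung-task detector's default timeout
is 120 seconds, so if one synchronize_rcu_expedited() section under rtnl_mutex
takes around 10 seconds, only 120/10 = 12 such back-to-back sections are
needed to keep another rtnl_mutex waiter blocked past the threshold and
produce a report like the one in the subject.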
>>>
>>>
>>> OK, let's stick with this theory for now. Looking at the crash frequencies here:
>>> https://syzkaller.appspot.com/bug?id=2503c576cabb08d41812e732b390141f01a59545
>>> I can actually believe that these are just flakes due to overly slow execution.
>>>
>>> I've noted that we need to either reduce the load and/or increase the timeouts:
>>> https://github.com/google/syzkaller/issues/516#issuecomment-395685629
>>
>> Hm, I'm not sure we should hide such situations from syzbot,
>> because load like this may occur in real life under a real workload.
>> The statistics from these reports may help us understand whether
>> rtnl_mutex already needs a redesign. Also, these hangs may happen
>> in a place that can be rewritten without rtnl_mutex, so they focus
>> attention on it.
> 
> If somebody wants to act on these reports:
> https://syzkaller.appspot.com/bug?id=2503c576cabb08d41812e732b390141f01a59545
> it's even better. The point is that testing must not have false
> positives, one way or another. If we do nothing then syzbot will
> slowly discover all 250 usages of rtnl_lock and produce unique bugs
> for them. Each and every one of these bug reports will need to be handled by
> somebody.
> 
> Does somebody want to act on these and improve rtnl performance in the
> foreseeable future?

I analyzed this question a little, and it looks like just the preparations
for improving the performance will take a very long time, and the performance
won't change until those preparations are finished.
So this looks like "not a foreseeable future".

Kirill
