Message-ID: <CACT4Y+ZBYYUiJejNbPcZWS+aHehvkgKkTKm0gvuviXGGcirJ5g@mail.gmail.com>
Date: Thu, 9 Jan 2020 11:59:25 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: syzbot <syzbot+aaa6fa4949cc5d9b7b25@...kaller.appspotmail.com>,
Ingo Molnar <mingo@...nel.org>,
Will Deacon <will.deacon@....com>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: BUG: MAX_LOCKDEP_CHAINS too low!
On Fri, Sep 28, 2018 at 9:56 AM Dmitry Vyukov <dvyukov@...gle.com> wrote:
> >> > Hello,
> >> >
> >> > syzbot found the following crash on:
> >> >
> >> > HEAD commit: c307aaf3eb47 Merge tag 'iommu-fixes-v4.19-rc5' of git://gi..
> >> > git tree: upstream
> >> > console output: https://syzkaller.appspot.com/x/log.txt?x=13810df1400000
> >> > kernel config: https://syzkaller.appspot.com/x/.config?x=dfb440e26f0a6f6f
> >> > dashboard link: https://syzkaller.appspot.com/bug?extid=aaa6fa4949cc5d9b7b25
> >> > compiler: gcc (GCC) 8.0.1 20180413 (experimental)
> >> >
> >> > Unfortunately, I don't have any reproducer for this crash yet.
> >> >
> >> > IMPORTANT: if you fix the bug, please add the following tag to the commit:
> >> > Reported-by: syzbot+aaa6fa4949cc5d9b7b25@...kaller.appspotmail.com
> >>
> >> +LOCKDEP maintainers,
> >>
> >> What does this BUG mean? And how should it be fixed?
> >>
> >> Thanks
> >>
> >> > BUG: MAX_LOCKDEP_CHAINS too low!
> >
> > Is this the result of endlessly loading and unloading modules?
> >
> > In which case, the fix is: don't do that then.
>
> No modules are involved; we don't have any modules in the image, so it
> must be something else.
> Perhaps syzkaller just produced a workload more diverse than anything
> produced before.
Peter, Ingo,
This has been plaguing syzbot testing for more than a year now. These four:
BUG: MAX_LOCKDEP_KEYS too low!
https://syzkaller.appspot.com/bug?id=8a18efe79140782a88dcd098808d6ab20ed740cc
BUG: MAX_LOCKDEP_ENTRIES too low!
https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b
BUG: MAX_LOCKDEP_CHAINS too low!
https://syzkaller.appspot.com/bug?id=bf037f4725d40a8d350b2b1b3b3e0947c6efae85
BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
https://syzkaller.appspot.com/bug?id=381cb436fe60dc03d7fd2a092b46d7f09542a72a
Right now, when running tests, I mostly just see a stream of different lockdep bugs:
2020/01/09 11:41:51 vm-13: crash: BUG: MAX_LOCKDEP_ENTRIES too low!
2020/01/09 11:43:09 vm-9: crash: INFO: task hung in register_netdevice_notifier
2020/01/09 11:44:00 vm-26: crash: no output from test machine
2020/01/09 11:44:11 vm-8: crash: BUG: MAX_LOCKDEP_ENTRIES too low!
2020/01/09 11:44:28 vm-19: crash: BUG: MAX_LOCKDEP_ENTRIES too low!
2020/01/09 11:46:20 vm-27: crash: BUG: MAX_LOCKDEP_ENTRIES too low!
2020/01/09 11:46:41 vm-15: crash: BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
2020/01/09 11:46:45 vm-28: crash: BUG: MAX_LOCKDEP_ENTRIES too low!
2020/01/09 11:46:47 vm-29: crash: BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
2020/01/09 11:46:49 vm-22: crash: BUG: MAX_LOCKDEP_ENTRIES too low!
2020/01/09 11:46:50 vm-10: crash: no output from test machine
2020/01/09 11:46:52 vm-18: crash: BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
2020/01/09 11:46:53 vm-23: crash: BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
2020/01/09 11:47:17 vm-20: crash: lost connection to test machine
2020/01/09 11:47:48 vm-5: crash: BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
2020/01/09 11:47:56 vm-14: crash: WARNING in restore_regulatory_settings
2020/01/09 11:48:19 vm-2: crash: BUG: MAX_LOCKDEP_ENTRIES too low!
2020/01/09 11:48:21 vm-7: crash: BUG: MAX_LOCKDEP_ENTRIES too low!
2020/01/09 11:48:22 vm-3: crash: BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
2020/01/09 11:48:40 vm-25: crash: BUG: MAX_LOCKDEP_CHAINS too low!
Should we just bump the limits there?
Or are there some ID leaks in lockdep? syzbot has a bunch of very
simple reproducers for these bugs, so it's not really a maximally
diverse load. And I think I saw these bugs en masse when testing just
a single subsystem too, e.g. netfilter.
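
For context, the four limits in question are compile-time constants in
kernel/locking/lockdep_internals.h, so bumping them means editing the
defines and rebuilding. The sketch below gives their approximate
definitions for kernels of roughly this vintage (values recalled from
that era's tree, not a patch; check your own tree before relying on them):

```c
/* Sketch of the relevant constants from kernel/locking/lockdep_internals.h,
 * approximate for early-2020 kernels. The lockdep tables are statically
 * sized from these, which is why exceeding any of them produces the
 * corresponding "BUG: ... too low!" splat. */
#define MAX_LOCKDEP_ENTRIES      32768UL                 /* dependency edges */
#define MAX_LOCKDEP_CHAINS_BITS  16
#define MAX_LOCKDEP_CHAINS       (1UL << MAX_LOCKDEP_CHAINS_BITS)   /* 65536 */
#define MAX_LOCKDEP_CHAIN_HLOCKS (MAX_LOCKDEP_CHAINS * 5)           /* 327680 */
#define MAX_LOCKDEP_KEYS_BITS    13
#define MAX_LOCKDEP_KEYS         (1UL << MAX_LOCKDEP_KEYS_BITS)     /* 8192 */
```

Current usage against these limits can be observed on a lockdep-enabled
kernel via /proc/lockdep_stats, which helps distinguish a genuine
workload-diversity overflow from a class/entry leak.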