Message-ID: <20200114084334.GI2827@hirez.programming.kicks-ass.net>
Date: Tue, 14 Jan 2020 09:43:34 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: syzbot <syzbot+aaa6fa4949cc5d9b7b25@...kaller.appspotmail.com>,
Ingo Molnar <mingo@...nel.org>,
Will Deacon <will.deacon@....com>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: BUG: MAX_LOCKDEP_CHAINS too low!

On Thu, Jan 09, 2020 at 11:59:25AM +0100, Dmitry Vyukov wrote:
> Or are there some ID leaks in lockdep? syzbot has a bunch of very
> simple reproducers for these bugs, so not really a maximally diverse
> load. And I think I saw these bugs massively when testing just a
> single subsystem too, e.g. netfilter.
Can you share one of the simple ones with me? A .c file I can run on my
regular test box that should make it go *splat*?
Often in the past hitting these limits was the result of some
particularly poor annotation.
For instance, locks in per-cpu data used to trigger this, since
static locks don't need explicit {mutex,spin_lock}_init() calls and
instead use their (static) address as the lock class key. That worked
fine for global state, but per-cpu data is an exception: there it
causes a nr_cpus explosion in lockdep state, because each CPU's copy
of the lock has a different address.
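
For illustration only (not from the original report; demo_lock and
demo_use are made-up names), the problematic pattern looked roughly
like this:

	#include <linux/percpu.h>
	#include <linux/spinlock.h>

	/*
	 * Statically initialized per-CPU lock: no spin_lock_init()
	 * call anywhere, so lockdep keys the class off the lock's
	 * address -- and each CPU's copy of demo_lock has a different
	 * address, giving nr_cpus classes for what is logically one
	 * lock.
	 */
	static DEFINE_PER_CPU(spinlock_t, demo_lock) =
		__SPIN_LOCK_UNLOCKED(demo_lock);

	static void demo_use(void)
	{
		/* get_cpu_ptr() disables preemption around the access */
		spinlock_t *lock = get_cpu_ptr(&demo_lock);

		spin_lock(lock);
		/* ... touch per-cpu state ... */
		spin_unlock(lock);

		put_cpu_ptr(&demo_lock);
	}
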
Now, we fixed that particular issue:
383776fa7527 ("locking/lockdep: Handle statically initialized PER_CPU locks properly")
but maybe there's something else going on.
Just blindly bumping the number without analysis of what exactly is
happening is never a good idea.