Message-ID: <CAKMK7uEAcLizuCEBAN99oFGaN02Wn_ief5asTbzD=Dcv-b=9VQ@mail.gmail.com>
Date: Thu, 12 Nov 2020 14:56:49 +0100
From: Daniel Vetter <daniel.vetter@...ll.ch>
To: Byungchul Park <byungchul.park@....com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Joel Fernandes <joel@...lfernandes.org>,
Sasha Levin <alexander.levin@...rosoft.com>,
"Wilson, Chris" <chris@...is-wilson.co.uk>, duyuyang@...il.com,
Johannes Berg <johannes.berg@...el.com>,
Tejun Heo <tj@...nel.org>, "Theodore Ts'o" <tytso@....edu>,
Matthew Wilcox <willy@...radead.org>,
Dave Chinner <david@...morbit.com>,
Amir Goldstein <amir73il@...il.com>,
"J. Bruce Fields" <bfields@...ldses.org>,
Greg KH <gregkh@...uxfoundation.org>, kernel-team@....com
Subject: Re: [RFC] Are you good with Lockdep?
On Thu, Nov 12, 2020 at 11:33 AM Byungchul Park <byungchul.park@....com> wrote:
>
> On Wed, Nov 11, 2020 at 09:36:09AM -0500, Steven Rostedt wrote:
> > And this is especially true with lockdep, because lockdep only detects
> > the deadlock; it doesn't tell you which locking was incorrect.
> >
> > For example. If we have a locking chain of:
> >
> > A -> B -> D
> >
> > A -> C -> D
> >
> > Which on a correct system looks like this:
> >
> > lock(A)
> > lock(B)
> > unlock(B)
> > unlock(A)
> >
> > lock(B)
> > lock(D)
> > unlock(D)
> > unlock(B)
> >
> > lock(A)
> > lock(C)
> > unlock(C)
> > unlock(A)
> >
> > lock(C)
> > lock(D)
> > unlock(D)
> > unlock(C)
> >
> > which creates the above chains in that order.
> >
> > But, let's say we have a bug and the system boots up doing:
> >
> > lock(D)
> > lock(A)
> > unlock(A)
> > unlock(D)
> >
> > which creates the incorrect chain.
> >
> > D -> A
> >
> >
> > Now you do the correct locking:
> >
> > lock(A)
> > lock(B)
> >
> > Creates A -> B
> >
> > lock(A)
> > lock(C)
> >
> > Creates A -> C
> >
> > lock(B)
> > lock(D)
> >
> > Creates B -> D and lockdep detects:
> >
> > D -> A -> B -> D
> >
> > and gives us the lockdep splat!!!
> >
> > But we don't disable lockdep. We let it continue...
> >
> > lock(C)
> > lock(D)
> >
> > Which creates C -> D
> >
> > Now it explodes with D -> A -> C -> D
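As an aside, this scenario should be easy to reproduce as a
self-contained example. A quick sketch (untested, hypothetical module,
mutex names matching the example above):

#include <linux/module.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(A);
static DEFINE_MUTEX(B);
static DEFINE_MUTEX(D);

static int __init splat_init(void)
{
	/* the one buggy ordering, records D -> A */
	mutex_lock(&D);
	mutex_lock(&A);
	mutex_unlock(&A);
	mutex_unlock(&D);

	/* correct ordering, records A -> B */
	mutex_lock(&A);
	mutex_lock(&B);
	mutex_unlock(&B);
	mutex_unlock(&A);

	/*
	 * correct ordering, records B -> D and closes the cycle:
	 * lockdep reports D -> A -> B -> D even though no deadlock
	 * actually happened
	 */
	mutex_lock(&B);
	mutex_lock(&D);
	mutex_unlock(&D);
	mutex_unlock(&B);

	return 0;
}
module_init(splat_init);

MODULE_LICENSE("GPL");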
>
> It would be better to check both so that we can choose between breaking
> the single D -> A chain or breaking both A -> B -> D and A -> C -> D.
>
> > Which it already reported. And it can be much more complex when dealing
> > with interrupt contexts and longer chains. That is, perhaps a different
>
> IRQ context is much, much worse than longer chains. I understand what
> you're trying to explain.
>
> > chain had a missing irq disable; now you might get 5 or 6 more lockdep
> > splats because of that one bug.
> >
> > The point I'm making is that the lockdep splats after the first one may
> > just be another version of the same bug and not a new one. Worse, if you
> > only look at the later lockdep splats, it may be much more difficult to
> > find the original bug than if you just had the first one. Believe me, I've
>
> If the later lockdep splats make it more difficult to fix, then we can
> look at the first one. If they're more informative, then we can check
> all the splats. Anyway, it's up to us.
>
> > been down that road too many times!
> >
> > And it can be very difficult to know if new lockdep splats are not the
> > same bug, and this will waste a lot of developers' time!
>
> Again, we don't have to waste time. We can go with the first one.
>
> > This is why the decision to disable lockdep after the first splat was made.
> > There were times I wanted to check locking somewhere, but I was using
> > linux-next which had a lockdep splat that I didn't care about. So I
> > made it not disable lockdep. And then I hit this exact scenario, that the
> > one incorrect chain was causing reports all over the place. To solve it, I
> > had to patch the incorrect chain to do raw locking to have lockdep ignore
> > it ;-) Then I was able to test the code I was interested in.
>
> It's not a problem of whether it's single-reporting or multi-reporting,
> but a problem of the lock creating the incorrect chain, which made it
> hard for you to handle.
>
> Even if you had been using single-reporting lockdep, you would still
> have had to keep ignoring locks in the same way until you got to the
> code you were interested in.
>
> > I think I understand it. For things like completions and other "wait for
> > events" we have lockdep annotations, but they are rather awkward to
> > implement. Having something like "lockdep_wait_event()" and
> > "lockdep_exec_event()" wrappers would be useful.
>
> Yes. It's a problem of a lack of APIs. It can be done by reverting the
> revert of cross-release without big changes. ;-)
+1 on lockdep-native support for this. For another use case I've added
annotations for dma_fence_wait, and unfortunately they're not entirely
correct. But the false positives are along the lines of "you really
shouldn't do this, even if it's in theory deadlock-free". See
commit 5fbff813a4a328b730cb117027c43a4ae9d8b6c0
Author: Daniel Vetter <daniel.vetter@...ll.ch>
Date:   Tue Jul 7 22:12:05 2020 +0200

    dma-fence: basic lockdep annotations
for fairly lengthy discussion of the problem and what I ended up with.
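For anyone who wants to experiment, the rough shape such wrappers could
take is below. This is only a sketch: lockdep_wait_event() and
lockdep_exec_event_begin/end() don't exist as APIs (the names come from
Steven's suggestion above), but lockdep's existing lock_map_acquire()
and lock_map_release() primitives are enough to build them, which is
essentially what the dma-fence annotations in that commit do:

#include <linux/lockdep.h>

static struct lockdep_map event_map =
	STATIC_LOCKDEP_MAP_INIT("my_event", &event_map);

/* waiting side: call before blocking on the event */
static inline void lockdep_wait_event(struct lockdep_map *map)
{
	/*
	 * An acquire/release pair tells lockdep "this context can
	 * block here until someone signals the event".
	 */
	lock_map_acquire(map);
	lock_map_release(map);
}

/* signalling side: wrap the code that eventually signals the event */
static inline void lockdep_exec_event_begin(struct lockdep_map *map)
{
	lock_map_acquire(map);
}

static inline void lockdep_exec_event_end(struct lockdep_map *map)
{
	lock_map_release(map);
}

Every lock taken between exec_event_begin()/end() is recorded as
nesting inside the event's pseudo-lock, so a waiter that calls
lockdep_wait_event() while holding one of those locks gives you a
deadlock report without the deadlock ever having to happen.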
Thanks, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch