Message-ID: <CACT4Y+Zt+fjBwJk-TcsccohBgxRNs37Hb4m6ZkZGy7u5P2+aaA@mail.gmail.com>
Date: Tue, 26 Mar 2019 11:32:48 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: "Theodore Ts'o" <tytso@....edu>,
Dmitry Vyukov <dvyukov@...gle.com>,
syzbot <syzbot+5cd33f0e6abe2bb3e397@...kaller.appspotmail.com>,
Andreas Dilger <adilger.kernel@...ger.ca>,
linux-ext4@...r.kernel.org,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>
Subject: Re: possible deadlock in __generic_file_fsync
On Sat, Mar 23, 2019 at 2:56 PM Theodore Ts'o <tytso@....edu> wrote:
>
> On Sat, Mar 23, 2019 at 08:16:36AM +0100, Dmitry Vyukov wrote:
> >
> > This is a lockdep-detected bug, but it is reproduced with very low
> > probability...
> >
> > I would expect that for lockdep it's only enough to trigger each path
> > involved in the deadlock once. Why is it so hard to reproduce then? Is
> > it something to improve in lockdep?
>
> It's a false positive report. The problem is that without doing some
> fairly deep code analysis --- the kind that a human needs to do; this
> is not the kind of thing that ML and adjusting weights in a neural net
> can deal with --- a computer can't determine what the completion
> handler will need to do.
>
> The root cause here is that we have N CPU's that are trying to do
> direct I/O's, and on the very first DIO write for a fd, we need to
> create a workqueue. (Why do we do it here? Because most fd's don't
> do DIO, so we don't want to waste resources unnecessarily. Why don't
> we fix it by adding a mutex? Because it would slow down *all* Direct
> I/O operations just to suppress a rare, false positive, lockdep
> report.)
>
> The reason why it's so hard for lockdep to reproduce is because it's
> extremely rare for this situation to get hit. When it does get hit,
> several CPU's will try to create the workqueue, and all but one will
> lose the cmpxchg, and so all but one will need to destroy the
> workqueue which they had just created. Since the wq in question has
> never been used, it's safe to call destroy_workqueue(wq) while holding
> the inode mutex --- but lockdep doesn't know this. As I pointed out
> in [1] one way to fix this is to create a new API and use it instead:
>
> I_solemnly_swear_this_workqueue_has_never_been_used_please_destroy()
>
> [1] https://lore.kernel.org/patchwork/patch/1003553/#1187773
>
> Unfortunately, this trades off fixing a very low probability lockdep
> false positive report that in practice only gets hit with Syzkaller,
> with making the code more fragile if the developer potentially uses
> the API incorrectly.
>
> As you can see from the date on the discussion, it's been over six
> months, and there still hasn't been a decision about the best way to
> fix this. I think the real problem is that it's pretty low priority,
> since it's only something that Syzkaller notices.
>
> The reality is in a world without Syzkaller, maybe once a decade, it
> would get hit on a real-life workload, and so we'd have to close a bug
> report with "Won't Fix; Not reproducible", and add a note saying that
> it's a false positive lockdep report. Syzkaller is adding stress to
> the system by demanding perfection from lockdep, and it isn't that,
> for better or for worse. ¯\_(ツ)_/¯ The question is what is the best
> way to annotate this as a false positive, so we can suppress the
> report, either in Lockdep or in Syzkaller.
Hi Ted,
Thanks for the analysis.
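
(For readers without the code in front of them: if I remember
fs/direct-io.c correctly, the lazy creation Ted describes looks roughly
like the sketch below; the function and field names are from memory,
not a verbatim copy.)

int sb_init_dio_done_wq(struct super_block *sb)
{
	struct workqueue_struct *old;
	struct workqueue_struct *wq =
		alloc_workqueue("dio/%s", WQ_MEM_RECLAIM, 0, sb->s_id);

	if (!wq)
		return -ENOMEM;
	/* Several CPUs can race here; only one workqueue gets installed. */
	old = cmpxchg(&sb->s_dio_done_wq, NULL, wq);
	/*
	 * Lost the race: someone else installed their workqueue first.
	 * Ours has never been used, so destroying it here is safe, but
	 * lockdep only sees destroy_workqueue() (which implies a flush)
	 * called under the locks held on the DIO path, hence the false
	 * positive.
	 */
	if (old)
		destroy_workqueue(wq);
	return 0;
}
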
So we can classify the reason for the wrong bisection result as a "too
hard to trigger" bug.

Re lockdep perfection, do you see any better alternative than what is
happening now?

One alternative is obviously to stop testing the kernel, which would
remove all related stress from all involved parties and remove any
perfection/quality requirements from everything :)
But that does not look like a better path forward.

Re I_solemnly_swear_this_workqueue_has_never_been_used_please_destroy,
I wonder if it's possible to automatically note the fact that the
workqueue was used. That should not make the code more fragile and
should not lead to incorrect uses of the API. It can slightly shift the
situation from "reporting false positives" towards "not reporting true
positives", but all bugs should still be detected eventually (we just
need any test where a single item was submitted to the queue). And in
my experience not reporting false positives is much more important than
reporting 100% of true positives.
Something along the lines of:
on submission of an item:

#ifdef CONFIG_LOCKDEP
	/* remember that this workqueue ever had real work queued on it */
	WRITE_ONCE(wq->was_used, 1);
#endif

in flush:

#ifdef CONFIG_LOCKDEP
	/* a workqueue that was never used cannot deadlock with its flusher */
	if (READ_ONCE(wq->was_used)) {
		lock_map_acquire(&wq->lockdep_map);
		lock_map_release(&wq->lockdep_map);
	}
#endif
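
Concretely, in kernel/workqueue.c terms this might look something like
the sketch below ("was_used" is a hypothetical new field of struct
workqueue_struct, it does not exist today):

/* called from __queue_work() once an item is actually queued */
static inline void wq_note_used(struct workqueue_struct *wq)
{
#ifdef CONFIG_LOCKDEP
	WRITE_ONCE(wq->was_used, 1);
#endif
}

and the READ_ONCE() check above would wrap the existing
lock_map_acquire()/lock_map_release() pair in flush_workqueue(), which
destroy_workqueue() I believe also reaches via drain_workqueue(). The
cost is one extra store per queue_work() under lockdep builds, which
should be noise compared to lockdep itself.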