Message-ID: <20190323135619.GB5675@mit.edu>
Date:   Sat, 23 Mar 2019 09:56:19 -0400
From:   "Theodore Ts'o" <tytso@....edu>
To:     Dmitry Vyukov <dvyukov@...gle.com>
Cc:     syzbot <syzbot+5cd33f0e6abe2bb3e397@...kaller.appspotmail.com>,
        Andreas Dilger <adilger.kernel@...ger.ca>,
        linux-ext4@...r.kernel.org,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Al Viro <viro@...iv.linux.org.uk>
Subject: Re: possible deadlock in __generic_file_fsync

On Sat, Mar 23, 2019 at 08:16:36AM +0100, Dmitry Vyukov wrote:
> 
> This is a lockdep-detected bug, but it is reproduced with very low
> probability...
> 
> I would expect that for lockdep it's enough to trigger each path
> involved in the deadlock just once.  Why is it so hard to reproduce
> then?  Is it something to improve in lockdep?

It's a false positive report.  The problem is that without fairly
deep code analysis (the kind that a human needs to do; this is not
something that ML and adjusting weights in a neural net can handle),
a computer can't determine what the completion handler will need to
do.

The root cause here is that we have N CPUs trying to do direct I/O,
and on the very first DIO write for an fd, we need to create a
workqueue.  (Why do we create it there?  Because most fds never do
DIO, so we don't want to waste resources unnecessarily.  Why don't we
fix the race by adding a mutex?  Because that would slow down *all*
Direct I/O operations just to suppress a rare, false-positive lockdep
report.)
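
The pattern looks roughly like the following sketch, modeled on
sb_init_dio_done_wq() in fs/direct-io.c (details vary by kernel
version):

    static int sb_init_dio_done_wq(struct super_block *sb)
    {
            struct workqueue_struct *old;
            struct workqueue_struct *wq = alloc_workqueue("dio/%s",
                                                          WQ_MEM_RECLAIM, 0,
                                                          sb->s_id);
            if (!wq)
                    return -ENOMEM;
            /*
             * Several CPUs can race to get here; the cmpxchg
             * guarantees that exactly one workqueue gets installed
             * on the superblock, with no mutex in the DIO path.
             */
            old = cmpxchg(&sb->s_dio_done_wq, NULL, wq);
            if (old)
                    destroy_workqueue(wq);  /* we lost the race */
            return 0;
    }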

The reason it's so hard to reproduce is that this situation is hit
extremely rarely.  When it does get hit, several CPUs will try to
create the workqueue, and all but one will lose the cmpxchg, so all
but one will need to destroy the workqueue they had just created.
Since the wq in question has never been used, it's safe to call
destroy_workqueue(wq) while holding the inode mutex, but lockdep
doesn't know this.  As I pointed out in [1], one way to fix this is
to create a new API and use it instead:

   I_solemnly_swear_this_workqueue_has_never_been_used_please_destroy()

[1] https://lore.kernel.org/patchwork/patch/1003553/#1187773
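
At the call site, the losing path would then look something like this
(destroy_unused_workqueue() is a hypothetical stand-in name for
whatever the API above ends up being called):

    old = cmpxchg(&sb->s_dio_done_wq, NULL, wq);
    if (old) {
            /*
             * We lost the race.  Our wq has never had any work
             * queued on it, so there is nothing to drain, and the
             * drain/flush step is what generates the lockdep
             * dependency; a "never used" destroy variant could
             * skip it.  (Hypothetical name, sketch only.)
             */
            destroy_unused_workqueue(wq);
    }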

Unfortunately, this trades fixing a very low probability lockdep
false positive (one that in practice only gets hit by Syzkaller)
against making the code more fragile if a developer uses the API
incorrectly.

As you can see from the date on the discussion, it's been over six
months, and there still hasn't been a decision about the best way to
fix this.  I think the real problem is that it's pretty low priority,
since it's only something that Syzkaller notices.

The reality is that in a world without Syzkaller, this might get hit
on a real-life workload maybe once a decade, and we'd close the bug
report as "Won't Fix; Not Reproducible" with a note saying that it's
a false positive lockdep report.  Syzkaller is adding stress to the
system by demanding perfection from lockdep, and lockdep isn't
perfect, for better or for worse.  ¯\_(ツ)_/¯  The question is what
the best way is to annotate this as a false positive, so we can
suppress the report, either in lockdep or in Syzkaller.
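
One blunt kernel-side suppression (a hypothetical sketch, not
something that has been merged for this case) would be to wrap the
teardown in lockdep_off()/lockdep_on(), which disable lock tracking
for the current task:

    if (old) {
            /*
             * Hypothetical: make lockdep ignore the teardown
             * entirely.  This suppresses the false positive, but it
             * would also hide any real deadlock on this path, which
             * is why it isn't an obvious win either.
             */
            lockdep_off();
            destroy_workqueue(wq);
            lockdep_on();
    }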

Cheers,

					- Ted
