Message-ID: <Y7L6PJuMVtEJUsj6@casper.infradead.org>
Date: Mon, 2 Jan 2023 15:37:32 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Hillf Danton <hdanton@...a.com>,
syzbot <syzbot+bed15dbf10294aa4f2ae@...kaller.appspotmail.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Waiman Long <longman@...hat.com>,
syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [ntfs3?] INFO: task hung in do_user_addr_fault (3)

On Mon, Jan 02, 2023 at 05:24:24PM +0900, Tetsuo Handa wrote:
> Since no lockdep annotation is used for e.g. PG_locked bit, this deadlock
> cannot be detected by lockdep...

lockdep, unfortunately, cannot track PG_locked. Lockdep requires that
the lock is released by the acquirer, and sometimes that's true for
PG_locked, but when it's used to do I/O, the PG_locked bit is released
in interrupt/BH context. We could maybe fake it by pretending we release
the folio lock when we submit the I/O. Then we'll have to figure out
how to tell lockdep that it's OK to grab the folio lock multiple times
(if within the same inode, ordered by folio->index; if in different
inodes, ordered by in-memory address of those inodes), and that
submitting an I/O will unlock all of the folios in that I/O. Oh, but
there are cases where we only submit part of a folio in an I/O, and the
lock will only be released when all of the I/Os targeting that folio
have been completed.

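Just to illustrate the ordering rule I mean, something like the sketch
below; folio_lock_two_ordered() is a made-up helper, not an existing
API, and this ignores the lockdep annotation problem entirely:

	/*
	 * Hypothetical sketch only (assumes <linux/pagemap.h>): within
	 * one inode, take folio locks in ascending folio->index order;
	 * across inodes, order by the inodes' in-memory addresses.
	 */
	static void folio_lock_two_ordered(struct folio *a, struct folio *b)
	{
		struct inode *ia = a->mapping->host;
		struct inode *ib = b->mapping->host;

		if (ia == ib) {
			/* Same inode: lock in ascending folio->index order. */
			if (a->index > b->index)
				swap(a, b);
		} else if (ia > ib) {
			/* Different inodes: order by inode address. */
			swap(a, b);
		}

		folio_lock(a);
		folio_lock(b);
	}
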
It's not impossible, but it is a lot of work and needs a lot of
understanding of filesystems/mm/io.