Message-ID: <YkvlRnO3Ho/mrk0V@infradead.org>
Date: Mon, 4 Apr 2022 23:44:22 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: Christoph Hellwig <hch@...radead.org>,
syzbot <syzbot+4f1a237abaf14719db49@...kaller.appspotmail.com>,
axboe@...nel.dk, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com,
Jan Kara <jack@...e.cz>
Subject: Re: [syzbot] INFO: task can't die in blkdev_common_ioctl
On Mon, Apr 04, 2022 at 02:12:14PM +0900, Tetsuo Handa wrote:
> On 2022/04/04 13:58, Christoph Hellwig wrote:
> > all, as it does not come through blkdev_fallocate.
>
> My patch proposes filemap_invalidate_lock_killable() and converts only
> blkdev_fallocate() case as a starting point. Nothing prevents us from
> converting e.g. blk_ioctl_zeroout() case as well. The "not come through
> blkdev_fallocate" is bogus.
Sure, we could try to convert most of the > 50 instances of
filemap_invalidate_lock to be killable. But that:
a) isn't what your patch actually did
b) doesn't solve the underlying issue that the lock wasn't designed to be
held over extremely long-running operations
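(For concreteness, a killable variant along the lines the patch discussion
proposes would presumably just be a thin wrapper around down_write_killable()
on the mapping's invalidate_lock. This is only a sketch of the idea; the
helper does not exist upstream and every caller would have to handle the
error return.)

	/*
	 * Sketch only: a killable counterpart to filemap_invalidate_lock(),
	 * mirroring how filemap_invalidate_lock() wraps down_write() on
	 * mapping->invalidate_lock.  Returns 0 on success or -EINTR if a
	 * fatal signal arrives while waiting for the lock.
	 */
	static inline int filemap_invalidate_lock_killable(struct address_space *mapping)
	{
		return down_write_killable(&mapping->invalidate_lock);
	}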
Or to get back to what I said before - I don't think we can just hold
the lock over manually zeroing potentially gigabytes of blocks.
In other words: we'll need to chunk the zeroing up if we want
to hold the invalidate lock; I see no other way to properly fix this.
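To illustrate what that chunking could look like (purely an untested sketch;
the chunk size, function name and parameters below are made up for the
example), something along these lines would take and drop the invalidate
lock per chunk and bail out on a fatal signal, instead of holding the lock
across gigabytes of zeroing:

	/*
	 * Untested sketch of chunked zeroing: process the range in bounded
	 * pieces so mapping->invalidate_lock is never held across a huge
	 * amount of I/O, and so a fatal signal can interrupt between chunks.
	 * BLKDEV_ZERO_CHUNK and the function name are invented for this
	 * example; the range is assumed to be sector-aligned.
	 */
	#define BLKDEV_ZERO_CHUNK	(1ULL << 30)	/* 1 GiB per iteration */

	static int blkdev_zeroout_chunked(struct block_device *bdev, fmode_t mode,
					  loff_t start, loff_t len, gfp_t gfp)
	{
		struct address_space *mapping = bdev->bd_inode->i_mapping;
		int error = 0;

		while (len > 0) {
			loff_t chunk = min_t(loff_t, len, BLKDEV_ZERO_CHUNK);
			loff_t end = start + chunk - 1;

			filemap_invalidate_lock(mapping);
			error = truncate_bdev_range(bdev, mode, start, end);
			if (!error)
				error = blkdev_issue_zeroout(bdev,
						start >> SECTOR_SHIFT,
						chunk >> SECTOR_SHIFT,
						gfp, BLKDEV_ZERO_NOUNMAP);
			filemap_invalidate_unlock(mapping);

			if (error)
				break;
			if (fatal_signal_pending(current)) {
				error = -EINTR;
				break;
			}
			start += chunk;
			len -= chunk;
		}
		return error;
	}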