Message-ID: <20210419152008.GD2531743@casper.infradead.org>
Date: Mon, 19 Apr 2021 16:20:08 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Jan Kara <jack@...e.cz>
Cc: linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-xfs@...r.kernel.org, Ted Tso <tytso@....edu>,
Christoph Hellwig <hch@...radead.org>,
Amir Goldstein <amir73il@...il.com>,
Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH 0/7 RFC v3] fs: Hole punch vs page cache filling races

On Tue, Apr 13, 2021 at 01:28:44PM +0200, Jan Kara wrote:
> Also, when writing the documentation, I came across one question: Do we
> mandate i_mapping_sem for truncate + hole punch for all filesystems, or just
> for filesystems that support hole punching (or other complex fallocate
> operations)? I wrote the documentation so that we require every filesystem
> to use i_mapping_sem. This makes the locking rules simpler, and we can also
> add asserts once all filesystems are converted. The downside is that simple
> filesystems now pay the overhead of locking that is unnecessary for them.
> The overhead is small (an uncontended rwsem acquisition per truncate), so I
> don't think we care, and the simplicity is worth it, but I wanted to spell
> this out.
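
For concreteness, that rule puts an rwsem round-trip on every truncate,
roughly like this (a minimal sketch only: the inode->i_mapping_sem field
and where it is taken follow the RFC's description, and simplefs_truncate()
is a made-up filesystem hook, not anything in the series):

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/rwsem.h>

    static void simplefs_truncate(struct inode *inode, loff_t newsize)
    {
        /* Exclusive: keep page cache filling out while pages are dropped. */
        down_write(&inode->i_mapping_sem);
        truncate_setsize(inode, newsize);
        /* filesystem-specific block freeing would go here */
        up_write(&inode->i_mapping_sem);
    }

That single down_write/up_write pair is the cost in question below.
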
What do we actually get in return for supporting these complex fallocate
operations? Someone added them for a reason, but does that reason
actually benefit me? Other than running xfstests, how many times has
holepunch been called on your laptop in the last week? I don't want to
incur even one extra instruction per I/O operation to support something
that happens twice a week; that's a bad tradeoff.
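
For reference, "holepunch" here means fallocate(2) with FALLOC_FL_PUNCH_HOLE,
which the API requires to be combined with FALLOC_FL_KEEP_SIZE; a minimal
userspace call looks like:

    #define _GNU_SOURCE
    #include <fcntl.h>

    /* Deallocate len bytes at offset; the file size is unchanged and
     * subsequent reads of the range return zeroes. */
    static int punch_hole(int fd, off_t offset, off_t len)
    {
        return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         offset, len);
    }
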
Can we implement holepunch as a NOP? Or return -ENOTTY? Those both
seem like better solutions than adding an extra rwsem to every inode.

Failing that, is there a bigger hammer we can use on the holepunch side
(eg preventing all concurrent accesses while the holepunch is happening)
to reduce the overhead on the read side?
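
One existing primitive with that cost profile is percpu_rw_semaphore: the
read side is close to free while no writer is active, and the writer pays
for the synchronization. A sketch of how the two sides could pair up (using
it in place of the proposed rwsem is a suggestion here, not something in
the posted series):

    #include <linux/percpu-rwsem.h>

    /* Read/fault side: nearly free while no holepunch is in progress. */
    static void page_fill_side(struct percpu_rw_semaphore *sem)
    {
        percpu_down_read(sem);
        /* ... instantiate page cache pages, copy data ... */
        percpu_up_read(sem);
    }

    /* Holepunch side: expensive, waits for and excludes all readers. */
    static void holepunch_side(struct percpu_rw_semaphore *sem)
    {
        percpu_down_write(sem);
        /* ... unmap and remove page cache, punch out blocks ... */
        percpu_up_write(sem);
    }

That trades a much slower write side for an almost-free read side, which is
the right direction if holepunch really does run twice a week.
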