Message-ID: <e6333d2d-cc30-44d3-8f23-6a6c5ea0134d@huaweicloud.com>
Date: Fri, 18 Jul 2025 19:30:10 +0800
From: Zhang Yi <yi.zhang@...weicloud.com>
To: Brian Foster <bfoster@...hat.com>
Cc: linux-fsdevel@...r.kernel.org, linux-xfs@...r.kernel.org,
linux-mm@...ck.org, hch@...radead.org, willy@...radead.org,
"Darrick J. Wong" <djwong@...nel.org>,
Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH v3 3/7] iomap: optional zero range dirty folio processing
On 2025/7/15 13:22, Darrick J. Wong wrote:
> On Mon, Jul 14, 2025 at 04:41:18PM -0400, Brian Foster wrote:
>> The only way zero range can currently process unwritten mappings
>> with dirty pagecache is to check whether the range is dirty before
>> mapping lookup and then flush when at least one underlying mapping
>> is unwritten. This ordering is required to prevent iomap lookup from
>> racing with folio writeback and reclaim.
>>
>> Since zero range can skip ranges of unwritten mappings that are
>> clean in cache, this operation can be improved by allowing the
>> filesystem to provide a set of dirty folios that require zeroing. In
>> turn, rather than flush or iterate file offsets, zero range can
>> iterate on folios in the batch and advance over clean or uncached
>> ranges in between.
>>
>> Add a folio_batch in struct iomap and provide a helper for fs' to
>
> /me confused by the single quote; is this supposed to read:
>
> "...for the fs to populate..."?
>
> Either way the code changes look like a reasonable thing to do for the
> pagecache (try to grab a bunch of dirty folios while XFS holds the
> mapping lock) so
>
> Reviewed-by: "Darrick J. Wong" <djwong@...nel.org>
>
> --D
>
>
>> populate the batch at lookup time. Update the folio lookup path to
>> return the next folio in the batch, if provided, and advance the
>> iter if the folio starts beyond the current offset.
>>
>> Signed-off-by: Brian Foster <bfoster@...hat.com>
>> Reviewed-by: Christoph Hellwig <hch@....de>
>> ---
>> fs/iomap/buffered-io.c | 89 +++++++++++++++++++++++++++++++++++++++---
>> fs/iomap/iter.c | 6 +++
>> include/linux/iomap.h | 4 ++
>> 3 files changed, 94 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
>> index 38da2fa6e6b0..194e3cc0857f 100644
>> --- a/fs/iomap/buffered-io.c
>> +++ b/fs/iomap/buffered-io.c
[...]
>> @@ -1398,6 +1452,26 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
>> return status;
>> }
>>
>> +loff_t
>> +iomap_fill_dirty_folios(
>> + struct iomap_iter *iter,
>> + loff_t offset,
>> + loff_t length)
>> +{
>> + struct address_space *mapping = iter->inode->i_mapping;
>> + pgoff_t start = offset >> PAGE_SHIFT;
>> + pgoff_t end = (offset + length - 1) >> PAGE_SHIFT;
>> +
>> + iter->fbatch = kmalloc(sizeof(struct folio_batch), GFP_KERNEL);
>> + if (!iter->fbatch)
Hi, Brian!

I think ext4 needs to be aware of this allocation failure once it
converts to the iomap infrastructure. If we fail to allocate the fbatch
here, iomap_zero_range() will fall back to flushing the unwritten and
dirty ranges. That could lead to a deadlock, because most calls to
ext4_block_zero_page_range() occur under an active journal handle, and
starting writeback under an active journal handle may result in circular
waiting within journal transactions. So please return the error code
here, so that ext4 can abort the zero operation and avoid the deadlock.
Thanks,
Yi.
>> + return offset + length;
>> + folio_batch_init(iter->fbatch);
>> +
>> + filemap_get_folios_dirty(mapping, &start, end, iter->fbatch);
>> + return (start << PAGE_SHIFT);
>> +}
>> +EXPORT_SYMBOL_GPL(iomap_fill_dirty_folios);