Message-ID: <4dd51853-1707-8d5f-1286-baa3126c162b@huaweicloud.com>
Date: Fri, 19 Apr 2024 16:14:49 +0800
From: Zhang Yi <yi.zhang@...weicloud.com>
To: Chandan Babu R <chandanbabu@...nel.org>
Cc: linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, djwong@...nel.org, hch@...radead.org,
brauner@...nel.org, david@...morbit.com, tytso@....edu, jack@...e.cz,
yi.zhang@...wei.com, chengzhihao1@...wei.com, yukuai3@...wei.com
Subject: Re: [PATCH v4 6/9] iomap: don't increase i_size if it's not a write
operation
On 2024/4/19 14:07, Chandan Babu R wrote:
> On Wed, Mar 20, 2024 at 07:05:45 PM +0800, Zhang Yi wrote:
>> From: Zhang Yi <yi.zhang@...wei.com>
>>
>> Increasing i_size in iomap_zero_range() and iomap_unshare_iter() is not
>> needed; the caller should handle it. In particular, when truncating a
>> partial block, we should not increase i_size beyond the new EOF here.
>> This doesn't affect xfs and gfs2 for now because they set the new file
>> size after zeroing out, so a transient increase in i_size does no harm,
>> but it will affect ext4 because it sets the file size before truncating.
>> So move the i_size updating logic to iomap_write_iter().
>>
>>
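For context, the intent of the change is roughly the sketch below (a
hypothetical helper with simplified names, not the actual upstream diff):
only the buffered-write path should be allowed to move EOF.

/*
 * Minimal sketch: called from the write iteration after data has been
 * copied into the page cache. The zero-range and unshare iterations no
 * longer call anything like this, so they can never extend i_size.
 */
static void sketch_write_update_size(struct inode *inode, loff_t pos,
                                     size_t written)
{
        /* Extend i_size only when the copied range ends past old EOF. */
        if (pos + written > i_size_read(inode))
                i_size_write(inode, pos + written);
}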
>
> This patch causes generic/522 to consistently fail when using the following
> fstests configuration:
>
> TEST_DEV=/dev/loop16
> TEST_LOGDEV=/dev/loop13
> SCRATCH_DEV_POOL="/dev/loop5 /dev/loop6 /dev/loop7 /dev/loop8 /dev/loop9 /dev/loop10 /dev/loop11 /dev/loop12"
> MKFS_OPTIONS='-f -m reflink=1,rmapbt=1, -i sparse=1, -lsize=1g'
> MOUNT_OPTIONS='-o usrquota,grpquota,prjquota'
> TEST_FS_MOUNT_OPTS="$TEST_FS_MOUNT_OPTS -o usrquota,grpquota,prjquota"
> TEST_FS_MOUNT_OPTS="-o logdev=/dev/loop13"
> SCRATCH_LOGDEV=/dev/loop15
> USE_EXTERNAL=yes
> LOGWRITES_DEV=/dev/loop15
>
Sorry for the regression; I didn't notice this issue the last time I ran
this test. It looks like patch 4 of this series doesn't handle some cases
correctly. I can reproduce it on my machine now, and I'll take a look at
this and fix it soon.
Thanks,
Yi.