Message-ID: <1b1c5fcf-0c71-4c5d-a429-3decc3bb8cb3@kernel.org>
Date: Wed, 31 Jul 2024 16:18:01 +0800
From: Chao Yu <chao@...nel.org>
To: Jaegeuk Kim <jaegeuk@...nel.org>
Cc: linux-f2fs-devel@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/4] f2fs: atomic: fix to truncate pagecache before
on-disk metadata truncation
On 2024/7/31 0:26, Jaegeuk Kim wrote:
> On 07/30, Chao Yu wrote:
>> On 2024/7/30 0:26, Jaegeuk Kim wrote:
>>> On 07/26, Chao Yu wrote:
>>>> We should always truncate pagecache while truncating on-disk data.
>>>>
>>>> Fixes: a46bebd502fe ("f2fs: synchronize atomic write aborts")
>>>> Signed-off-by: Chao Yu <chao@...nel.org>
>>>> ---
>>>> v2:
>>>> - fix to use cow_inode instead of inode
>>>> fs/f2fs/file.c | 1 +
>>>> 1 file changed, 1 insertion(+)
>>>>
>>>> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
>>>> index 6c62f76474d1..54886ddcb8ab 100644
>>>> --- a/fs/f2fs/file.c
>>>> +++ b/fs/f2fs/file.c
>>>> @@ -2202,6 +2202,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
>>>>  		F2FS_I(fi->cow_inode)->atomic_inode = inode;
>>>>  	} else {
>>>>  		/* Reuse the already created COW inode */
>>>> +		truncate_setsize(fi->cow_inode, 0);
>>>
>>> What if the truncation below fails?
>>
>> What about just dropping the page cache and not setting i_size to 0?
>
> Can we also check if there's any dirty page before truncating it?
Sure, so how about this?
	/* Reuse the already created COW inode */
	if (get_dirty_pages(fi->cow_inode)) {
		ret = filemap_write_and_wait(fi->cow_inode->i_mapping);
		if (ret) {
			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
			goto out;
		}
	}

	truncate_inode_pages(fi->cow_inode->i_mapping, 0);

	ret = f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
	if (ret) {
		f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
		goto out;
	}

	i_size_write(fi->cow_inode, 0);
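
In context, a rough sketch only (assuming the code around the else branch stays as in the quoted diff), the whole branch would then read:

	} else {
		/* Reuse the already created COW inode */

		/* Write back any dirty COW pages before dropping the cache. */
		if (get_dirty_pages(fi->cow_inode)) {
			ret = filemap_write_and_wait(fi->cow_inode->i_mapping);
			if (ret) {
				f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
				goto out;
			}
		}

		/* Drop the stale pagecache first... */
		truncate_inode_pages(fi->cow_inode->i_mapping, 0);

		/* ...then truncate the on-disk blocks. */
		ret = f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
		if (ret) {
			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
			goto out;
		}

		/* Finally reset i_size so the reused COW inode starts empty. */
		i_size_write(fi->cow_inode, 0);
	}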
Thanks,
>
>>
>> Thanks,
>>
>>>
>>>>  		ret = f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
>>>>  		if (ret) {
>>>>  			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
>>>> --
>>>> 2.40.1