Message-ID: <CAHJ8P3LddwapGLV5nKmw1ULTm7qp5OMk-kFBK3s_UptPPeph7w@mail.gmail.com>
Date: Tue, 5 Nov 2024 19:02:25 +0800
From: Zhiguo Niu <niuzhiguo84@...il.com>
To: Chao Yu <chao@...nel.org>
Cc: Zhiguo Niu <zhiguo.niu@...soc.com>, jaegeuk@...nel.org,
linux-f2fs-devel@...ts.sourceforge.net, linux-kernel@...r.kernel.org,
ke.wang@...soc.com, Hao_hao.Wang@...soc.com
Subject: Re: [PATCH V2] f2fs: fix to adjust appropriate length for fiemap
On Tue, Nov 5, 2024 at 6:39 PM Chao Yu <chao@...nel.org> wrote:
>
> On 2024/11/5 15:28, Zhiguo Niu wrote:
> > On Tue, Nov 5, 2024 at 3:04 PM Chao Yu <chao@...nel.org> wrote:
> >>
> >> On 2024/11/4 9:56, Zhiguo Niu wrote:
> >>> If the user passes a file size as the "length" parameter to a
> >>> fiemap operation, and this size is not block-size aligned, fiemap
> >>> reports two extents even though the whole file is contiguous on
> >>> disk, as in the following results:
> >>>
> >>> ./f2fs_io fiemap 0 19034 ylog/analyzer.py
> >>> Fiemap: offset = 0 len = 19034
> >>> logical addr. physical addr. length flags
> >>> 0 0000000000000000 0000000020baa000 0000000000004000 00001000
> >>> 1 0000000000004000 0000000020bae000 0000000000001000 00001001
> >>>
> >>> after this patch:
> >>> ./f2fs_io fiemap 0 19034 ylog/analyzer.py
> >>> Fiemap: offset = 0 len = 19034
> >>> logical addr. physical addr. length flags
> >>> 0 0000000000000000 00000000315f3000 0000000000005000 00001001
> >>>
> >>> Signed-off-by: Zhiguo Niu <zhiguo.niu@...soc.com>
> >>> ---
> >>> V2: correct commit msg according to Chao's questions
> >>> f2fs_io has been modified for testing; the length passed to fiemap
> >>> is the real file size, not a block count
> >>> ---
> >>> fs/f2fs/data.c | 4 ++--
> >>> 1 file changed, 2 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> >>> index 306b86b0..9fc229d 100644
> >>> --- a/fs/f2fs/data.c
> >>> +++ b/fs/f2fs/data.c
> >>> @@ -1966,8 +1966,8 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
> >>> goto out;
> >>> }
> >>>
> >>> - if (bytes_to_blks(inode, len) == 0)
> >>> - len = blks_to_bytes(inode, 1);
> >>> + if (len & (blks_to_bytes(inode, 1) - 1))
> >>> + len = round_up(len, blks_to_bytes(inode, 1));
> >>
> >> How do you think of getting rid of above alignment for len?
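To illustrate, here is a minimal userspace sketch of what the patched
check computes for the unaligned length above (assuming a 4 KiB block
size; BLKSIZE and ROUND_UP stand in for the kernel helpers and are
only illustrative, not the real f2fs code):

#include <stdio.h>

#define BLKSIZE         4096UL
#define ROUND_UP(x, a)  (((x) + (a) - 1) / (a) * (a))

int main(void)
{
        unsigned long len = 19034;  /* unaligned length from the example */

        /* old check: only bump lengths shorter than one block */
        unsigned long old_len = (len / BLKSIZE == 0) ? BLKSIZE : len;

        /* new check: round any unaligned length up to a block boundary */
        unsigned long new_len = (len % BLKSIZE) ? ROUND_UP(len, BLKSIZE) : len;

        printf("old len = %lu, new len = %lu\n", old_len, new_len);
        /* prints: old len = 19034, new len = 20480 */
        return 0;
}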
> >>
> >>>
> >>> start_blk = bytes_to_blks(inode, start);
> >>> last_blk = bytes_to_blks(inode, start + len - 1);
> >>
> >> And round up end position w/:
> >>
> >> last_blk = bytes_to_blks(inode, round_up(start + len - 1, F2FS_BLKSIZE));
> > Hi Chao,
> > I think this will change the current code logic:
> > -------------
> > if (start_blk > last_blk)
> > goto out;
> > -------------
> > For example, take a file of size 19006 where the length from the user
> > is 16384.
> > Before this modification, last_blk = bytes_to_blks(inode, start +
> > len - 1) = bytes_to_blks(inode, 16383) = 3.
> > After the first f2fs_map_blocks(), start_blk changes to 4;
> > after the second f2fs_map_blocks(), fiemap_fill_next_extent() will be
> > called to fill the user buffer, and then
> > we will goto out because start_blk > last_blk, so the fiemap flow finishes.
> > But after this modification, last_blk will be 4, and we will keep
> > calling f2fs_map_blocks() until reaching max_file_blocks(inode).
>
> Yes, you're right. However, w/ this patch it may change last_blk, e.g.:
>
> xfs_io -c "fiemap -v 0 19006" file vs xfs_io -c "fiemap -v 2 19006" file
> start_blk and last_blk will be: 0, 4 and 0, 5.
Hi Chao,
Yes, but w/o this patch the original code has the same situation, no?
For example:
xfs_io -c "fiemap -v 0 16384" file vs xfs_io -c "fiemap -v 2 16384" file
start_blk and last_blk will be: 0, 3 and 0, 4.
Overall, a different last_blk only changes the loop count; it has no
effect on the results.
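A minimal sketch of the numbers above (assuming a 4 KiB block size;
plain division stands in for bytes_to_blks(), so this is illustrative
rather than the actual kernel code):

#include <stdio.h>

#define BLKSIZE 4096UL

static void calc(unsigned long start, unsigned long len)
{
        unsigned long start_blk = start / BLKSIZE;
        unsigned long last_blk  = (start + len - 1) / BLKSIZE;

        printf("start=%lu len=%lu -> start_blk=%lu last_blk=%lu\n",
               start, len, start_blk, last_blk);
}

int main(void)
{
        calc(0, 16384);     /* -> start_blk=0 last_blk=3 */
        calc(2, 16384);     /* -> start_blk=0 last_blk=4 */
        return 0;
}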
>
> Should we round_up len after start_blk & last_blk calculation?
I think it is OK, but it is a little redundant with the following
handling of len:
if (bytes_to_blks(inode, len) == 0)
len = blks_to_bytes(inode, 1);
Given the above situation,
do you have any other good suggestions? ^^
thanks!
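For reference, a small sketch (again assuming a 4 KiB block size, with
ROUND_UP standing in for the kernel's round_up()) of why the rounding
subsumes that existing check for sub-block lengths:

#include <stdio.h>

#define BLKSIZE         4096UL
#define ROUND_UP(x, a)  (((x) + (a) - 1) / (a) * (a))

int main(void)
{
        unsigned long len = 100;   /* bytes_to_blks(inode, 100) == 0 */

        /* existing handling bumps sub-block lengths to one block */
        unsigned long a = (len / BLKSIZE == 0) ? BLKSIZE : len;

        /* rounding up yields the same one-block result */
        unsigned long b = ROUND_UP(len, BLKSIZE);

        printf("existing = %lu, round_up = %lu\n", a, b);  /* 4096, 4096 */
        return 0;
}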
>
> Thanks,
>
> > thanks!
> >>
> >> Thanks,
> >>
>