Message-ID: <055101dc2d25$7e068a70$7a139f50$@samsung.com>
Date: Wed, 24 Sep 2025 16:33:16 +0900
From: "Yunji Kang" <yunji0.kang@...sung.com>
To: "'Chao Yu'" <chao@...nel.org>, <jaegeuk@...nel.org>
Cc: <linux-f2fs-devel@...ts.sourceforge.net>,
<linux-kernel@...r.kernel.org>, "'Sungjong Seo'" <sj1557.seo@...sung.com>,
"'Sunmin Jeong'" <s_min.jeong@...sung.com>
Subject: RE: [PATCH v2] f2fs: readahead node blocks in F2FS_GET_BLOCK_PRECACHE mode
> On 9/24/25 12:17, Yunji Kang wrote:
> >>> In f2fs_precache_extents(), large files require reading many node
> >>> blocks. Instead of reading each node block with synchronous I/O,
> >>> this patch applies readahead so that node blocks can be fetched in
> >>> advance.
> >>>
> >>> It reduces the overhead of repeated sync reads and improves
> >>> efficiency when precaching extents of large files.
> >>>
> >>> I created a file with the same largest extent and executed the test.
> >>> For this experiment, I set the file's largest extent with an offset of
> >>> 0 and a size of 1GB. I configured the remaining area with 100MB
> >>> extents.
> >>>
> >>> 5GB test file:
> >>> dd if=/dev/urandom of=test1 bs=1m count=5120
> >>> cp test1 test2
> >>> fsync test1
> >>> dd if=test1 of=test2 bs=1m skip=1024 seek=1024 count=100 conv=notrunc
> >>> dd if=test1 of=test2 bs=1m skip=1224 seek=1224 count=100 conv=notrunc
> >>> ...
> >>> dd if=test1 of=test2 bs=1m skip=5024 seek=5024 count=100 conv=notrunc
> >>> reboot
> >>>
> >>> I also created 10GB and 20GB files with large extents using the same
> >>> method.
> >>>
> >>> ioctl(F2FS_IOC_PRECACHE_EXTENTS) test results are as follows:
> >>> +-----------+---------+---------+-----------+
> >>> | File size | Before  | After   | Reduction |
> >>> +-----------+---------+---------+-----------+
> >>> | 5GB       | 101.8ms | 72.1ms  | 29.2%     |
> >>> | 10GB      | 222.9ms | 149.5ms | 32.9%     |
> >>> | 20GB      | 446.2ms | 276.3ms | 38.1%     |
> >>> +-----------+---------+---------+-----------+
> >>
> >> Yunji,
> >>
> >> Will we gain better performance if we readahead more node pages w/
> >> synchronous request for the precache extent case? Have you tried that?
> >>
> >> Thanks,
> >>
> >
> > Does “readahead more node pages” mean removing this condition?
> > " offset[i - 1] % MAX_RA_NODE == 0 "
>
> Actually, I meant a) remove "offset[i - 1] % MAX_RA_NODE == 0" or b)
> increase MAX_RA_NODE.
>
> Also, maybe we can try something like the below to trigger synchronous IO
> for such a highly deterministic read.
>
> void f2fs_ra_node_page()
> {
> 	...
> 	err = read_node_folio(afolio, 0);
> 	...
> }
>
I’m not sure I fully understood, but does this mean that, in the precache case, the readahead node blocks should be handled with synchronous reads?
With the current code, that seems difficult to implement.
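
To make sure I read the suggestion correctly: is the idea roughly the
following? (An uncompiled sketch based on the snippet above; the function
name and the folio lookup step are placeholders, not actual code.)

void f2fs_ra_node_page_sync(struct f2fs_sb_info *sbi, nid_t nid)
{
	struct folio *afolio;
	int err;

	/* ... grab the node folio for nid into afolio, as usual ... */

	/* wait for the read instead of issuing it as readahead */
	err = read_node_folio(afolio, 0);

	/* ... handle err and release the folio ... */
}
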
> >
> > I originally added the condition to prevent unnecessary readahead
> > requests, but it seems this condition was actually blocking valid
> > readahead as well.
> >
> > After removing the condition and running tests, I confirmed that more
> > readahead node pages are being issued.
> >
> > I’ll share the test results along with the improved patch.
>
> It makes sense, thanks for checking this and sharing the result.
>
> Thanks,
>
I tested with the revised v3 code and confirmed that most node pages are now handled by readahead.
(In v2, only about half of the node pages were processed with readahead.)
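
For reference, if the modulo check is simply dropped from the v2 hunk quoted
below, the condition in f2fs_get_dnode_of_data() ends up roughly as follows
(illustrative sketch only; please see the v3 patch for the exact diff):

	} else if ((i == level && level > 1) &&
		   (mode == LOOKUP_NODE_RA || mode == LOOKUP_NODE_PRECACHE)) {
		/* get the node folio and readahead nearby node blocks */
		nfolio[i] = f2fs_get_node_folio_ra(parent, offset[i - 1]);
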
Thank you for your review.
> >
> > Thanks,
> >
> >>> Tested on a 256GB mobile device with an SM8750 chipset.
> >>>
> >>> Reviewed-by: Sungjong Seo <sj1557.seo@...sung.com>
> >>> Reviewed-by: Sunmin Jeong <s_min.jeong@...sung.com>
> >>> Signed-off-by: Yunji Kang <yunji0.kang@...sung.com>
> >>> ---
> >>> v2:
> >>> - Modify the readahead condition check routine for better code
> >>> readability.
> >>> - Update the title from 'node block' to 'node blocks'.
> >>>
> >>> fs/f2fs/data.c | 3 +++
> >>> fs/f2fs/f2fs.h | 1 +
> >>> fs/f2fs/node.c | 5 ++++-
> >>> 3 files changed, 8 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> >>> index 7961e0ddfca3..ab3117e3b24a 100644
> >>> --- a/fs/f2fs/data.c
> >>> +++ b/fs/f2fs/data.c
> >>> @@ -1572,6 +1572,9 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag)
> >>> 	pgofs = (pgoff_t)map->m_lblk;
> >>> 	end = pgofs + maxblocks;
> >>>
> >>> +	if (flag == F2FS_GET_BLOCK_PRECACHE)
> >>> +		mode = LOOKUP_NODE_PRECACHE;
> >>> +
> >>> next_dnode:
> >>> 	if (map->m_may_create) {
> >>> 		if (f2fs_lfs_mode(sbi))
> >>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> >>> index 9d3bc9633c1d..3ce41528d48e 100644
> >>> --- a/fs/f2fs/f2fs.h
> >>> +++ b/fs/f2fs/f2fs.h
> >>> @@ -651,6 +651,7 @@ enum {
> >>> 				 * look up a node with readahead called
> >>> 				 * by get_data_block.
> >>> 				 */
> >>> +	LOOKUP_NODE_PRECACHE,	/* look up a node for F2FS_GET_BLOCK_PRECACHE */
> >>> };
> >>>
> >>> #define DEFAULT_RETRY_IO_COUNT	8	/* maximum retry read IO or flush count */
> >>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> >>> index 4254db453b2d..d4bf3ce715c5 100644
> >>> --- a/fs/f2fs/node.c
> >>> +++ b/fs/f2fs/node.c
> >>> @@ -860,7 +860,10 @@ int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
> >>> 			set_nid(parent, offset[i - 1], nids[i], i == 1);
> >>> 			f2fs_alloc_nid_done(sbi, nids[i]);
> >>> 			done = true;
> >>> -		} else if (mode == LOOKUP_NODE_RA && i == level && level > 1) {
> >>> +		} else if ((i == level && level > 1) &&
> >>> +			   (mode == LOOKUP_NODE_RA ||
> >>> +			   (mode == LOOKUP_NODE_PRECACHE &&
> >>> +			    offset[i - 1] % MAX_RA_NODE == 0))) {
> >>> 			nfolio[i] = f2fs_get_node_folio_ra(parent, offset[i - 1]);
> >>> 			if (IS_ERR(nfolio[i])) {
> >>> 				err = PTR_ERR(nfolio[i]);
> >
> >
> >