Message-ID: <20251230173845.2310677-1-daeho43@gmail.com>
Date: Tue, 30 Dec 2025 09:38:45 -0800
From: Daeho Jeong <daeho43@...il.com>
To: linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net,
kernel-team@...roid.com
Cc: Daeho Jeong <daehojeong@...gle.com>
Subject: [PATCH] f2fs: flush plug periodically during GC to maximize readahead effect
From: Daeho Jeong <daehojeong@...gle.com>
During the garbage collection process, F2FS submits readahead I/Os for
valid blocks. However, since the GC loop runs within a single plug scope
without intermediate flushing, these readahead I/Os often accumulate in
the block layer's plug list instead of being dispatched to the device
immediately.
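
For context, here is a minimal sketch of the plugging behavior involved
(not part of this patch; submit_one_readahead() is a hypothetical helper
standing in for the GC readahead calls):

#include <linux/blkdev.h>

/* Bios submitted between blk_start_plug() and blk_finish_plug() are
 * held in the task's plug list rather than dispatched immediately. */
static void ra_batch_sketch(unsigned int nr_blocks)
{
	struct blk_plug plug;
	unsigned int i;

	blk_start_plug(&plug);
	for (i = 0; i < nr_blocks; i++)
		submit_one_readahead(i);	/* hypothetical helper */
	blk_finish_plug(&plug);	/* queued bios are dispatched only here */
}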
Consequently, when the GC thread later tries to lock one of these pages,
the I/O may not have completed (or even started), so the access
degenerates into a synchronous "issue the read and wait" path. This
negates the benefit of readahead and adds unnecessary latency to GC.
This patch addresses the issue by inserting an intermediate
blk_finish_plug() and blk_start_plug() pair at each phase transition of
the GC loop. This forces the dispatch of pending I/Os, ensuring that
readahead pages are fetched in time and thereby reducing GC latency.
Signed-off-by: Daeho Jeong <daehojeong@...gle.com>
---
fs/f2fs/gc.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 384fa7e2085b..8ffc3d4f7989 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -1031,7 +1031,8 @@ static int check_valid_map(struct f2fs_sb_info *sbi,
  * ignore that.
  */
 static int gc_node_segment(struct f2fs_sb_info *sbi,
-		struct f2fs_summary *sum, unsigned int segno, int gc_type)
+		struct f2fs_summary *sum, unsigned int segno, int gc_type,
+		struct blk_plug *plug)
 {
 	struct f2fs_summary *entry;
 	block_t start_addr;
@@ -1100,8 +1101,11 @@ static int gc_node_segment(struct f2fs_sb_info *sbi,
 		stat_inc_node_blk_count(sbi, 1, gc_type);
 	}
 
-	if (++phase < 3)
+	if (++phase < 3) {
+		blk_finish_plug(plug);
+		blk_start_plug(plug);
 		goto next_step;
+	}
 
 	if (fggc)
 		atomic_dec(&sbi->wb_sync_req[NODE]);
@@ -1535,7 +1539,7 @@ static int move_data_page(struct inode *inode, block_t bidx, int gc_type,
  */
 static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
 		struct gc_inode_list *gc_list, unsigned int segno, int gc_type,
-		bool force_migrate)
+		bool force_migrate, struct blk_plug *plug)
 {
 	struct super_block *sb = sbi->sb;
 	struct f2fs_summary *entry;
@@ -1703,8 +1707,11 @@ static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
 		}
 	}
 
-	if (++phase < 5)
+	if (++phase < 5) {
+		blk_finish_plug(plug);
+		blk_start_plug(plug);
 		goto next_step;
+	}
 
 	return submitted;
 }
@@ -1853,11 +1860,11 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
 		 */
 		if (type == SUM_TYPE_NODE)
 			submitted += gc_node_segment(sbi, sum->entries,
-							cur_segno, gc_type);
+							cur_segno, gc_type, &plug);
 		else
 			submitted += gc_data_segment(sbi, sum->entries,
 							gc_list, cur_segno,
-							gc_type, force_migrate);
+							gc_type, force_migrate, &plug);
 
 		stat_inc_gc_seg_count(sbi, data_type, gc_type);
 		sbi->gc_reclaimed_segs[sbi->gc_mode]++;
--
2.52.0.351.gbe84eed79e-goog