Message-ID: <c9c8b609-68b8-4f44-98eb-8d04e1a270fb@kernel.org>
Date: Thu, 16 May 2024 16:06:34 +0800
From: Chao Yu <chao@...nel.org>
To: Liao Yuanhong <liaoyuanhong@...o.com>, Jaegeuk Kim <jaegeuk@...nel.org>
Cc: bo.wu@...o.com, linux-f2fs-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] f2fs: modify the entering condition for
f2fs_migrate_blocks()
On 2024/5/15 16:24, Liao Yuanhong wrote:
> Currently, when we allocate a swap file on zoned UFS, the file is
> created on the conventional UFS area. If the swap file size is not
> aligned to the zone size, the last extent enters f2fs_migrate_blocks(),
> resulting in significant additional I/O overhead and prolonged lock
> occupancy. In most cases this is unnecessary, because on conventional
> UFS, as long as the start block of the swap file is aligned to a zone
> boundary, the file is laid out sequentially. To avoid this, change the
> condition for entering f2fs_migrate_blocks(): if the start block of the
> last extent is aligned to the start of a zone and the extent lies in
> the conventional area, skip f2fs_migrate_blocks().
Hi,
Is it possible that we can pin the swapfile, fallocate it to a size aligned
to the zone size, and then mkswap and swapon?
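
For illustration, a rough and untested userspace sketch of that flow (the
mount point, file name and 2GiB size below are made-up examples; it assumes
the size is a multiple of the device zone size and that F2FS_IOC_SET_PIN_FILE
from <linux/f2fs.h> is available):

#define _GNU_SOURCE			/* for fallocate() */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/f2fs.h>			/* F2FS_IOC_SET_PIN_FILE */

int main(void)
{
	const char *path = "/mnt/f2fs/swapfile";	/* hypothetical mount */
	__u32 pin = 1;
	off_t len = 2LL << 30;		/* 2GiB, assumed zone-size aligned */
	int fd = open(path, O_CREAT | O_RDWR, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Pin the file before any blocks are allocated so they stay put. */
	if (ioctl(fd, F2FS_IOC_SET_PIN_FILE, &pin) < 0) {
		perror("F2FS_IOC_SET_PIN_FILE");
		return 1;
	}

	/* Preallocate the whole zone-aligned length in one go. */
	if (fallocate(fd, 0, 0, len) < 0) {
		perror("fallocate");
		return 1;
	}
	close(fd);

	/* then: mkswap /mnt/f2fs/swapfile && swapon /mnt/f2fs/swapfile */
	return 0;
}

If the pinned, preallocated file comes out section-aligned, swapon should not
hit the unaligned path in check_swap_activate() at all, so no migration would
be needed.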
Thanks,
>
> Signed-off-by: Liao Yuanhong <liaoyuanhong@...o.com>
> Signed-off-by: Wu Bo <bo.wu@...o.com>
> ---
> fs/f2fs/data.c | 23 +++++++++++++++++++++--
> 1 file changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 50ceb25b3..4d58fb6c2 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -3925,10 +3925,12 @@ static int check_swap_activate(struct swap_info_struct *sis,
>  	block_t pblock;
>  	block_t lowest_pblock = -1;
>  	block_t highest_pblock = 0;
> +	block_t blk_start;
>  	int nr_extents = 0;
>  	unsigned int nr_pblocks;
>  	unsigned int blks_per_sec = BLKS_PER_SEC(sbi);
>  	unsigned int not_aligned = 0;
> +	unsigned int cur_sec;
>  	int ret = 0;
>
>  	/*
> @@ -3965,23 +3967,39 @@ static int check_swap_activate(struct swap_info_struct *sis,
>  		pblock = map.m_pblk;
>  		nr_pblocks = map.m_len;
>
> -		if ((pblock - SM_I(sbi)->main_blkaddr) % blks_per_sec ||
> +		blk_start = pblock - SM_I(sbi)->main_blkaddr;
> +
> +		if (blk_start % blks_per_sec ||
>  				nr_pblocks % blks_per_sec ||
>  				!f2fs_valid_pinned_area(sbi, pblock)) {
>  			bool last_extent = false;
>
>  			not_aligned++;
>
> +			cur_sec = (blk_start + nr_pblocks) / BLKS_PER_SEC(sbi);
>  			nr_pblocks = roundup(nr_pblocks, blks_per_sec);
> -			if (cur_lblock + nr_pblocks > sis->max)
> +			if (cur_lblock + nr_pblocks > sis->max) {
>  				nr_pblocks -= blks_per_sec;
>
> +				/* the start address is aligned to section */
> +				if (!(blk_start % blks_per_sec))
> +					last_extent = true;
> +			}
> +
>  			/* this extent is last one */
>  			if (!nr_pblocks) {
>  				nr_pblocks = last_lblock - cur_lblock;
>  				last_extent = true;
>  			}
>
> +			/*
> +			 * the last extent located on conventional UFS doesn't
> +			 * need to be migrated
> +			 */
> +			if (last_extent && f2fs_sb_has_blkzoned(sbi) &&
> +					cur_sec < GET_SEC_FROM_SEG(sbi, first_zoned_segno(sbi)))
> +				goto next;
> +
>  			ret = f2fs_migrate_blocks(inode, cur_lblock,
>  					nr_pblocks);
>  			if (ret) {
> @@ -3994,6 +4012,7 @@ static int check_swap_activate(struct swap_info_struct *sis,
>  				goto retry;
>  		}
>
> +next:
>  		if (cur_lblock + nr_pblocks >= sis->max)
>  			nr_pblocks = sis->max - cur_lblock;
>
> --
> 2.25.1
>