Message-ID: <cbc18f90-9c5a-49b0-9c6f-d1fbab0f2df1@kernel.org>
Date: Thu, 4 Jul 2024 16:46:28 +0800
From: Chao Yu <chao@...nel.org>
To: Liao Yuanhong <liaoyuanhong@...o.com>, Jaegeuk Kim <jaegeuk@...nel.org>
Cc: bo.wu@...o.com, linux-f2fs-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] f2fs:Add write priority option based on zone UFS
On 2024/7/2 14:29, Liao Yuanhong wrote:
> Currently, we are using a mix of traditional UFS and zone UFS to support
> some functionalities that cannot be achieved on zone UFS alone. However,
> there are some issues with this approach. There exists a significant
> performance difference between traditional UFS and zone UFS. Under normal
> usage, we prioritize writes to zone UFS. However, in critical conditions
> (such as when the entire UFS is almost full), we cannot determine whether
> data will be written to traditional UFS or zone UFS. This can lead to
> significant performance fluctuations, which is not conducive to
> development and testing. To address this, we have added an option
> zlu_io_enable under sysfs with the following three modes:
> 1) zlu_io_enable == 0: Normal mode, prioritize writing to zone UFS;
> 2) zlu_io_enable == 1: Zone UFS only mode, only allow writing to zone UFS;
> 3) zlu_io_enable == 2: Traditional UFS priority mode, prioritize writing to
> traditional UFS.
How about using blkzone_alloc_policy instead of zlu_io_enable as the name? Not sure.
Please also update the f2fs sysfs documentation (Documentation/ABI/testing/sysfs-fs-f2fs) for the new entry.
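
Whatever the name ends up being, please define macros for the three values,
e.g. something like this (just a sketch; the macro names below are only my
suggestion, they don't exist yet):

#define ZONED_ALLOC_PRIOR_SEQ	0	/* prioritize sequential (zoned) space */
#define ZONED_ALLOC_ONLY_SEQ	1	/* allocate from sequential (zoned) space only */
#define ZONED_ALLOC_PRIOR_CONV	2	/* prioritize conventional space */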
>
> Signed-off-by: Liao Yuanhong <liaoyuanhong@...o.com>
> Signed-off-by: Wu Bo <bo.wu@...o.com>
> ---
> fs/f2fs/f2fs.h | 5 +++++
> fs/f2fs/segment.c | 23 ++++++++++++++++++++++-
> fs/f2fs/sysfs.c | 13 +++++++++++++
> 3 files changed, 40 insertions(+), 1 deletion(-)
>
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index f7ee6c5e371e..7ba782bd15b2 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -1781,6 +1781,11 @@ struct f2fs_sb_info {
> u64 committed_atomic_block;
> u64 revoked_atomic_block;
>
> +#ifdef CONFIG_BLK_DEV_ZONED
> +	/* For adjusting the priority write position of data on zoned UFS */
> + unsigned int zlu_io_enable; /* data write mode */
> +#endif
#ifdef CONFIG_BLK_DEV_ZONED
unsigned int blocks_per_blkz; /* F2FS blocks per zone */
unsigned int max_open_zones; /* max open zone resources of the zoned device */
	/* For adjusting the priority write position of data on zoned UFS */
unsigned int zlu_io_enable; /* data write mode */
#endif
> +
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> struct kmem_cache *page_array_slab; /* page array entry */
> unsigned int page_array_slab_size; /* default page array slab size */
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index 4db1add43e36..327457c28700 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -2686,17 +2686,38 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
> goto got_it;
> }
>
> +#ifdef CONFIG_BLK_DEV_ZONED
> /*
> * If we format f2fs on zoned storage, let's try to get pinned sections
> * from beginning of the storage, which should be a conventional one.
> */
> if (f2fs_sb_has_blkzoned(sbi)) {
> - segno = pinning ? 0 : max(first_zoned_segno(sbi), *newseg);
> + if (sbi->zlu_io_enable == 2) /* Prioritize writing to traditional UFS */
Please use a macro instead of a magic number.
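e.g. with the macros suggested above (the names are just an example):

		if (sbi->zlu_io_enable == ZONED_ALLOC_PRIOR_CONV)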
> + segno = 0;
> + else
> + segno = pinning ? 0 : max(first_zoned_segno(sbi), *newseg);
> hint = GET_SEC_FROM_SEG(sbi, segno);
> }
> +#endif
>
> find_other_zone:
> secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
> +
> +#ifdef CONFIG_BLK_DEV_ZONED
> + if (secno >= MAIN_SECS(sbi) && f2fs_sb_has_blkzoned(sbi)) {
> + if (sbi->zlu_io_enable == 1) { /* Write only to zone UFS */
Ditto.
And we'd better initialize it with the default policy somewhere.
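e.g. something like this when the superblock info is set up (just a sketch;
f2fs_fill_super() looks like a reasonable place, using the suggested macro name):

	sbi->zlu_io_enable = ZONED_ALLOC_PRIOR_SEQ;	/* default: prioritize zoned space */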
> + hint = GET_SEC_FROM_SEG(sbi, first_zoned_segno(sbi));
> + secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
> + } else
> + secno = find_first_zero_bit(free_i->free_secmap,
> + MAIN_SECS(sbi));
> + if (secno >= MAIN_SECS(sbi)) {
> + ret = -ENOSPC;
> + goto out_unlock;
> + }
> + }
> +#endif
> +
> if (secno >= MAIN_SECS(sbi)) {
> secno = find_first_zero_bit(free_i->free_secmap,
> MAIN_SECS(sbi));
> diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
> index fee7ee45ceaa..bc9e5e8bb749 100644
> --- a/fs/f2fs/sysfs.c
> +++ b/fs/f2fs/sysfs.c
> @@ -627,6 +627,13 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
> }
> #endif
>
> +#ifdef CONFIG_BLK_DEV_ZONED
> + if (!strcmp(a->attr.name, "zlu_io_enable")) {
Should we validate the new value here?
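e.g. something like this before the assignment (just a sketch, assuming only
modes 0, 1 and 2 are valid):

		if (t > 2)
			return -EINVAL;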
> + sbi->zlu_io_enable = t;
> + return count;
> + }
> +#endif
> +
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> if (!strcmp(a->attr.name, "compr_written_block") ||
> !strcmp(a->attr.name, "compr_saved_block")) {
> @@ -1006,6 +1013,9 @@ F2FS_SBI_GENERAL_RW_ATTR(dir_level);
> F2FS_SBI_GENERAL_RW_ATTR(iostat_enable);
> F2FS_SBI_GENERAL_RW_ATTR(iostat_period_ms);
> #endif
> +#ifdef CONFIG_BLK_DEV_ZONED
> +F2FS_SBI_GENERAL_RW_ATTR(zlu_io_enable);
#ifdef CONFIG_BLK_DEV_ZONED
F2FS_SBI_GENERAL_RO_ATTR(unusable_blocks_per_sec);
F2FS_SBI_GENERAL_RW_ATTR(zlu_io_enable);
#endif
Thanks,
> +#endif
> F2FS_SBI_GENERAL_RW_ATTR(readdir_ra);
> F2FS_SBI_GENERAL_RW_ATTR(max_io_bytes);
> F2FS_SBI_GENERAL_RW_ATTR(data_io_flag);
> @@ -1153,6 +1163,9 @@ static struct attribute *f2fs_attrs[] = {
> #ifdef CONFIG_F2FS_IOSTAT
> ATTR_LIST(iostat_enable),
> ATTR_LIST(iostat_period_ms),
> +#endif
> +#ifdef CONFIG_BLK_DEV_ZONED
> + ATTR_LIST(zlu_io_enable),
> #endif
> ATTR_LIST(readdir_ra),
> ATTR_LIST(max_io_bytes),