Message-Id: <20240515082433.24411-1-liaoyuanhong@vivo.com>
Date: Wed, 15 May 2024 16:24:33 +0800
From: Liao Yuanhong <liaoyuanhong@...o.com>
To: Jaegeuk Kim <jaegeuk@...nel.org>,
	Chao Yu <chao@...nel.org>
Cc: bo.wu@...o.com,
	linux-f2fs-devel@...ts.sourceforge.net,
	linux-kernel@...r.kernel.org,
	Liao Yuanhong <liaoyuanhong@...o.com>
Subject: [PATCH] f2fs: modify the entering condition for f2fs_migrate_blocks()

Currently, when we allocate a swap file on zoned UFS, the file is
created on the conventional UFS area. If the swap file size is not
aligned with the zone size, the last extent will enter
f2fs_migrate_blocks(), resulting in significant additional I/O overhead
and prolonged lock occupancy. In most cases this is unnecessary: on
conventional UFS, as long as the start block of the swap file is
aligned with the zone, the file is already sequentially aligned. To
avoid this, change the condition for entering f2fs_migrate_blocks():
if the start block of the last extent is aligned with the start of a
zone, skip f2fs_migrate_blocks().
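
For illustration, the new bail-out condition boils down to the check
below. This is a simplified, standalone sketch of the arithmetic only;
the names blk_start, nr_pblocks, blks_per_sec and first_zoned_sec stand
in for the in-kernel values and helpers and are not the exact f2fs code:

#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified model: the last extent of the swap file may skip
 * f2fs_migrate_blocks() when its start is section-aligned and the
 * extent still ends inside the conventional (non-zoned) area.
 */
static bool can_skip_migrate(uint64_t blk_start, uint64_t nr_pblocks,
			     uint64_t blks_per_sec, uint64_t first_zoned_sec)
{
	/* section in which the extent ends */
	uint64_t cur_sec = (blk_start + nr_pblocks) / blks_per_sec;

	/* the extent's start must sit on a section/zone boundary */
	if (blk_start % blks_per_sec)
		return false;

	/* and the extent must end before the first sequential zone */
	return cur_sec < first_zoned_sec;
}

In the patch itself the skip additionally requires that the filesystem
has the blkzoned feature (f2fs_sb_has_blkzoned()) and only applies to
the extent identified as the last one.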

Signed-off-by: Liao Yuanhong <liaoyuanhong@...o.com>
Signed-off-by: Wu Bo <bo.wu@...o.com>
---
 fs/f2fs/data.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 50ceb25b3..4d58fb6c2 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3925,10 +3925,12 @@ static int check_swap_activate(struct swap_info_struct *sis,
 	block_t pblock;
 	block_t lowest_pblock = -1;
 	block_t highest_pblock = 0;
+	block_t blk_start;
 	int nr_extents = 0;
 	unsigned int nr_pblocks;
 	unsigned int blks_per_sec = BLKS_PER_SEC(sbi);
 	unsigned int not_aligned = 0;
+	unsigned int cur_sec;
 	int ret = 0;
 
 	/*
@@ -3965,23 +3967,39 @@ static int check_swap_activate(struct swap_info_struct *sis,
 		pblock = map.m_pblk;
 		nr_pblocks = map.m_len;
 
-		if ((pblock - SM_I(sbi)->main_blkaddr) % blks_per_sec ||
+		blk_start = pblock - SM_I(sbi)->main_blkaddr;
+
+		if (blk_start % blks_per_sec ||
 				nr_pblocks % blks_per_sec ||
 				!f2fs_valid_pinned_area(sbi, pblock)) {
 			bool last_extent = false;
 
 			not_aligned++;
 
+			cur_sec = (blk_start + nr_pblocks) / BLKS_PER_SEC(sbi);
 			nr_pblocks = roundup(nr_pblocks, blks_per_sec);
-			if (cur_lblock + nr_pblocks > sis->max)
+			if (cur_lblock + nr_pblocks > sis->max) {
 				nr_pblocks -= blks_per_sec;
 
+				/* the start address is aligned to the section */
+				if (!(blk_start % blks_per_sec))
+					last_extent = true;
+			}
+
 			/* this extent is last one */
 			if (!nr_pblocks) {
 				nr_pblocks = last_lblock - cur_lblock;
 				last_extent = true;
 			}
 
+			/*
+			 * the last extent, which is located on conventional UFS,
+			 * doesn't need to be migrated
+			 */
+			if (last_extent && f2fs_sb_has_blkzoned(sbi) &&
+				cur_sec < GET_SEC_FROM_SEG(sbi, first_zoned_segno(sbi)))
+				goto next;
+
 			ret = f2fs_migrate_blocks(inode, cur_lblock,
 							nr_pblocks);
 			if (ret) {
@@ -3994,6 +4012,7 @@ static int check_swap_activate(struct swap_info_struct *sis,
 				goto retry;
 		}
 
+next:
 		if (cur_lblock + nr_pblocks >= sis->max)
 			nr_pblocks = sis->max - cur_lblock;
 
-- 
2.25.1

