Message-ID: <20241016052758.3400359-1-yi.sun@unisoc.com>
Date: Wed, 16 Oct 2024 13:27:56 +0800
From: Yi Sun <yi.sun@...soc.com>
To: <chao@...nel.org>
CC: <jaegeuk@...nel.org>, <linux-f2fs-devel@...ts.sourceforge.net>,
<linux-kernel@...r.kernel.org>, <yi.sun@...soc.com>,
<sunyibuaa@...il.com>, <niuzhiguo84@...il.com>,
<hao_hao.wang@...soc.com>, <ke.wang@...soc.com>
Subject: [RFC PATCH 0/2] Speed up f2fs truncate
Deleting large files is time-consuming, and much of that time is
spent in f2fs_invalidate_blocks()
->down_write(sit_info->sentry_lock) and up_write().
If some blocks are contiguous and belong to the same segment,
they can be processed together. This reduces the number of
down_write() and up_write() calls, thereby speeding up the
overall truncate.
Test steps:
Set the CPU and DDR frequencies to the maximum.
dd if=/dev/random of=./test.txt bs=1M count=100000
sync
rm test.txt
Time comparison of rm:
  original    optimized    improvement
  7.17s       3.27s        54.39%
Hi, currently only the f2fs truncate path is optimized; other
callers of f2fs_invalidate_blocks() are not covered. As a result,
the new functions f2fs_invalidate_compress_pages_range() and
check_f2fs_invalidate_consecutive_blocks() are not general-purpose.
Is this modification acceptable?
Yi Sun (2):
f2fs: introduce update_sit_entry_for_release()
f2fs: introduce f2fs_invalidate_consecutive_blocks() for truncate
fs/f2fs/compress.c | 14 ++++++
fs/f2fs/f2fs.h | 5 ++
fs/f2fs/file.c | 34 ++++++++++++-
fs/f2fs/segment.c | 116 +++++++++++++++++++++++++++++++--------------
4 files changed, 133 insertions(+), 36 deletions(-)
--
2.25.1