Message-ID: <20241030103136.2874140-1-yi.sun@unisoc.com>
Date: Wed, 30 Oct 2024 18:31:31 +0800
From: Yi Sun <yi.sun@...soc.com>
To: <chao@...nel.org>, <jaegeuk@...nel.org>
CC: <yi.sun@...soc.com>, <sunyibuaa@...il.com>,
<linux-f2fs-devel@...ts.sourceforge.net>,
<linux-kernel@...r.kernel.org>, <niuzhiguo84@...il.com>,
<hao_hao.wang@...soc.com>, <ke.wang@...soc.com>
Subject: [PATCH v2 0/5] Speed up f2fs truncate
Deleting large files is time-consuming, and much of the time
is spent in f2fs_invalidate_blocks(), in the
down_write(&sit_info->sentry_lock) / up_write() pair.
If some blocks are contiguous, we can process them together.
This reduces the number of down_write() and up_write() calls,
thereby improving the overall speed of truncate.
Test steps:
Set the CPU and DDR frequencies to the maximum.
dd if=/dev/random of=./test.txt bs=1M count=100000
sync
rm test.txt
Time comparison of rm:
  original    optimized    improvement
  7.17s       3.27s        54.39%
Yi Sun (5):
f2fs: blocks need to belong to the same segment when using
update_sit_entry()
f2fs: expand f2fs_invalidate_compress_page() to
f2fs_invalidate_compress_pages_range()
f2fs: add parameter @len to f2fs_invalidate_internal_cache()
f2fs: add parameter @len to f2fs_invalidate_blocks()
f2fs: Optimize f2fs_truncate_data_blocks_range()
fs/f2fs/compress.c | 11 +++---
fs/f2fs/data.c | 2 +-
fs/f2fs/f2fs.h | 16 +++++----
fs/f2fs/file.c | 78 ++++++++++++++++++++++++++++++++++++++----
fs/f2fs/gc.c | 2 +-
fs/f2fs/node.c | 4 +--
fs/f2fs/segment.c | 84 +++++++++++++++++++++++++++++++++++++++-------
7 files changed, 161 insertions(+), 36 deletions(-)
--
2.25.1