Message-ID: <b76e5aaa-edb2-4a4d-a6a8-72f6e975f398@xiaomi.com>
Date: Wed, 25 Jun 2025 09:50:22 +0000
From: Huang Jianan <huangjianan@...omi.com>
To: Chao Yu <chao@...nel.org>, "linux-f2fs-devel@...ts.sourceforge.net"
<linux-f2fs-devel@...ts.sourceforge.net>, "jaegeuk@...nel.org"
<jaegeuk@...nel.org>
CC: Wang Hui <wanghui33@...omi.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Sheng Yong <shengyong1@...omi.com>
Subject: Re: [External Mail]Re: [PATCH v3] f2fs: avoid splitting bio when
reading multiple pages
On 2025/6/25 17:48, Jianan Huang wrote:
> On 2025/6/25 16:45, Chao Yu wrote:
>>
>> On 6/25/25 14:49, Jianan Huang wrote:
>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>> bios and then merged at the block level. In this case, nr_cpages should
>>> be used to pre-allocate bvecs.
>>> To handle this case, align max_nr_pages to cluster_size, which should be
>>> enough for all compressed pages.
>>>
>>> Signed-off-by: Jianan Huang <huangjianan@...omi.com>
>>> Signed-off-by: Sheng Yong <shengyong1@...omi.com>
>>> ---
>>> Changes since v2:
>>> - Initialize index only for compressed files.
>>> Changes since v1:
>>> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>>>
>>> fs/f2fs/data.c | 12 ++++++++++--
>>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 31e892842625..d071d9f6a811 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>> }
>>>
>>> if (!bio) {
>>> - bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>> + bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
>>
>> Jianan,
>>
>> Another case:
>>
>> read page #0,1,2,3 from block #1000,1001,1002, cluster_size=4.
>>
>> nr_pages=4
>> max_nr_pages=round_up(0+4,4)-round_down(0,4)=4
>>
>> f2fs_mpage_readpages() calls f2fs_read_multi_pages() when nr_pages=1, at
>> that time, max_nr_pages equals to 1 as well.
>>
>> f2fs_grab_read_bio(..., 1 - 0, ...) allocates a bio w/ 1 vec capacity,
>> however,
>> we need at least 3 vecs to merge all cpages, right?
>>
>
> Hi, Chao,
>
> If we don't align nr_pages, then when entering f2fs_read_multi_pages(),
> the remaining nr_pages pages belong to other clusters.
> If this is the last page, we can simply pass nr_pages = 0.
>
> When allocating bio, we need:
> 1. The cpages remaining in the current cluster, which should be
> (nr_cpages - i).
> 2. The maximum cpages remaining in other clusters, which should be
> max(nr_pages, cc->nr_cpages).
>
Correction: that should be align(nr_pages, cc->nr_cpages), sorry for this.
> So (nr_cpages - i) + max(nr_pages, nr_cpages) should be enough for all
> vecs?
>
> Thanks,
>
>
>> Thanks,
>>
>>> f2fs_ra_op_flags(rac),
>>> folio->index, for_write);
>>> if (IS_ERR(bio)) {
>>> @@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>> unsigned max_nr_pages = nr_pages;
>>> int ret = 0;
>>>
>>> +#ifdef CONFIG_F2FS_FS_COMPRESSION
>>> + if (f2fs_compressed_file(inode)) {
>>> + index = rac ? readahead_index(rac) : folio->index;
>>> + max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
>>> + round_down(index, cc.cluster_size);
>>> + }
>>> +#endif
>>> +
>>> map.m_pblk = 0;
>>> map.m_lblk = 0;
>>> map.m_len = 0;
>>> @@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>> map.m_seg_type = NO_CHECK_TYPE;
>>> map.m_may_create = false;
>>>
>>> - for (; nr_pages; nr_pages--) {
>>> + for (; nr_pages; nr_pages--, max_nr_pages--) {
>>> if (rac) {
>>> folio = readahead_folio(rac);
>>> prefetchw(&folio->flags);
>>
>