Message-ID: <c28020f1-e2f2-42e8-9c0c-0ff70ec219cd@kernel.org>
Date: Tue, 30 Dec 2025 17:27:26 +0800
From: Chao Yu <chao@...nel.org>
To: Jeuk Kim <jeuk20.kim@...il.com>, jaegeuk@...nel.org
Cc: chao@...nel.org, Jinyoung Choi <j-young.choi@...sung.com>,
Jeuk Kim <jeuk20.kim@...sung.com>, linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: Question: batching block allocation in f2fs DIO path
Hi Jeuk,
On 12/29/2025 2:33 PM, Jeuk Kim wrote:
> Hi F2FS maintainers,
>
> Sorry for the duplicate — I’m resending this because the previous
> message was sent in HTML format.
>
> I’ve been looking into the DIO allocation path in f2fs, specifically
> when a DIO write needs to allocate new blocks (e.g., hole-filling).
> From f2fs_map_blocks() through __allocate_data_block() →
> f2fs_allocate_data_block(), it seems each block allocation is handled
> one-by-one, taking curseg_lock/curseg_mutex and the SIT sentry lock
> per block.
>
> I’m wondering whether batching allocations (a bounded batch, e.g., a
> small run within the current segment) could be feasible in the DIO
> path. My intuition is that with multiple threads doing DIO, reducing
> per-block lock contention and improving sequentiality could help
> throughput.
I agree w/ you.
>
> Questions:
>
> Is there a technical or correctness reason that makes batching for DIO
> infeasible (e.g., LFS/SSR/GC interactions, summary/SIT update
> ordering, etc.)?
>
> Or is this simply an optimization that hasn’t been implemented?
I've implemented a prototype of multiple block allocation for several potential
use cases: pinfile fallocation, direct IO, and buffered IO. I saw benefits in
my previous tests.
I plan to upstream all of these implementations, but I think I need more time
to clean up the draft code and check all corner cases.
You can check the MBA (multiple block allocation) implementation for the
pinfile use case at the link below; I guess this version is close to upstream.
https://github.com/chaseyu/f2fs-dev/commits/feature/inbatch_write
Thanks,
>
> If this seems acceptable, would you consider patches in this direction?
>
> If there are prior discussions or known issues on this, I’d appreciate pointers.
>
> Thanks for your time.
>
> Best regards,
> Jeuk Kim