Message-ID: <CAPjHTeSsvb7UOAn9mWoqXwWTw1J9SEEDo1k=8KVcAxwVsys+Og@mail.gmail.com>
Date: Mon, 29 Dec 2025 15:33:45 +0900
From: Jeuk Kim <jeuk20.kim@...il.com>
To: jaegeuk@...nel.org, Chao Yu <chao@...nel.org>
Cc: Jinyoung Choi <j-young.choi@...sung.com>, Jeuk Kim <jeuk20.kim@...sung.com>,
linux-kernel@...r.kernel.org, linux-f2fs-devel@...ts.sourceforge.net
Subject: Question: batching block allocation in f2fs DIO path
Hi F2FS maintainers,
Sorry for the duplicate — I’m resending this because the previous
message was sent in HTML format.
I’ve been looking into the DIO allocation path in f2fs, specifically
when a DIO write needs to allocate new blocks (e.g., hole-filling).
From f2fs_map_blocks() through __allocate_data_block() →
f2fs_allocate_data_block(), it seems each block allocation is handled
one-by-one, taking curseg_lock/curseg_mutex and the SIT sentry lock
per block.
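In other words, the pattern I’m looking at is roughly the following (a
paraphrase with arguments and error handling trimmed, not the actual
code):

	/*
	 * One call per missing block in the requested range; the
	 * curseg_lock/curseg_mutex and the SIT sentry lock are taken
	 * and released inside every call.
	 */
	for (blk = start_blk; blk < end_blk && !err; blk++)
		err = __allocate_data_block(&dn, seg_type);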
I’m wondering whether batching allocations (a bounded batch, e.g., a
small run within the current segment) could be feasible in the DIO
path. My intuition is that with multiple threads doing DIO, reducing
per-block lock contention and improving sequentiality could help
throughput.
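To make the question concrete, the kind of interface I have in mind is
roughly the sketch below. Everything here is hypothetical: nothing named
f2fs_allocate_data_blocks() exists today, the SIT/summary updates and
error handling are elided, and the field/lock names are written from
memory.

static unsigned int f2fs_allocate_data_blocks(struct f2fs_sb_info *sbi,
					      struct curseg_info *curseg,
					      block_t *blkaddrs,
					      unsigned int count)
{
	unsigned int i, run;

	down_read(&SM_I(sbi)->curseg_lock);	/* once per batch */
	mutex_lock(&curseg->curseg_mutex);

	/* bounded batch: never cross the end of the current segment */
	run = min_t(unsigned int, count,
		    sbi->blocks_per_seg - curseg->next_blkoff);

	for (i = 0; i < run; i++) {
		blkaddrs[i] = NEXT_FREE_BLKADDR(sbi, curseg);
		/*
		 * The per-block summary and SIT sentry updates would go
		 * here, as in f2fs_allocate_data_block() today, just
		 * without dropping and retaking the locks between blocks.
		 */
		curseg->next_blkoff++;	/* LFS case only; SSR would need
					 * the existing
					 * __refresh_next_blkoff() style
					 * handling */
	}

	mutex_unlock(&curseg->curseg_mutex);
	up_read(&SM_I(sbi)->curseg_lock);

	return run;
}

The DIO caller would then map blkaddrs[0..run) to consecutive file
offsets and fall back to the existing per-block path whenever run comes
back short of the request.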
Questions:
1) Is there a technical or correctness reason that makes batching for
   DIO infeasible (e.g., LFS/SSR/GC interactions, summary/SIT update
   ordering, etc.)?
2) Or is this simply an optimization that hasn’t been implemented?
3) If this seems acceptable, would you consider patches in this direction?
4) If there are prior discussions or known issues on this, I’d appreciate pointers.
Thanks for your time.
Best regards,
Jeuk Kim