Message-ID: <20260204213713.GD31420@macsyma.lan>
Date: Wed, 4 Feb 2026 16:37:13 -0500
From: "Theodore Tso" <tytso@....edu>
To: Mario Lohajner <mario_lohajner@...ketmail.com>
Cc: Andreas Dilger <adilger@...ger.ca>, linux-ext4@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] ext4: add optional rotating block allocation policy

On Wed, Feb 04, 2026 at 12:07:57PM +0100, Mario Lohajner wrote:
>
> Yes, the main motive for this allocator is flash wear leveling,
> but it is not strictly a wear leveling mechanism, and it is not named
> as such for a reason.

If the device needs such a flash wear leveling scheme, it's very
likely that it's not going to work very well for ext4, because there
will be *far* more writes to statically located metadata (the
superblock, the inode tables, and the allocation bitmaps, which are
scattered across the LBA space), and that will potentially cause
problems for such a flash device.
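
To make that concrete, here is a rough sketch (not ext4 code; it
assumes a 4k block size and the default 32,768 blocks per group, and
it ignores flex_bg and the backup superblocks) of where that static
metadata lands in the LBA space:

	#include <stdio.h>

	/*
	 * Illustration only: each block group keeps its block bitmap,
	 * inode bitmap, and inode table at fixed locations near the
	 * start of the group, so metadata updates keep rewriting the
	 * same LBAs, spread roughly every 128MB across the device.
	 */
	int main(void)
	{
		const unsigned long blocks_per_group = 32768;
		const unsigned long block_size = 4096;

		for (unsigned long group = 0; group < 8; group++) {
			unsigned long first_block = group * blocks_per_group;

			printf("group %2lu: static metadata near %4lu MB\n",
			       group, (first_block * block_size) >> 20);
		}
		return 0;
	}

Every block or inode allocation in a group rewrites those same fixed
metadata blocks, no matter how the data blocks themselves are rotated.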

In practice, even the simplest Flash Translation Layer implementations
do not require this, so I question whether devices that would need
this actually exist. Even the cheapest flash devices, for low-cost
mobile devices and digital cameras, have not needed this in the 30
plus years that commercial flash storage has been around, and the
micro-controllers which implement the FTL have been getting more
sophisticated, not less. Do you have a specific flash storage device
where this would be helpful? Or is this a hypothetical exercise?

> This policy helps avoid allocation hotspots at mount start by
> distributing allocations sequentially across the entire mount,
> not just a file or allocation stream.

Why are you worrying about allocation hotspots? What's the high-level
problem that you are trying to address, if it is not about wear
leveling?

> At the block/group allocation level, the file system is fairly stochastic
> and timing-sensitive. Rather than providing raw benchmark data, I prefer
> to explain the design analytically:

Whether you use raw benchmarks or try to do thought experiments, you
really need to specify your assumptions about the nature of (a) the
storage device, and (b) the workload. For example, if the flash
device has such a primitive, terrible flash translation layer that
the file system needs to handle wear levelling, it's generally the
cheapest, most trashy storage device that can be imagined. In those
cases, the bottleneck will likely be read/write speed. So we probably
don't need to worry about block allocator performance while writing
to this storage device, because the I/O throughput and latency are
probably comparable to the worst possible USB thumb drive that you
might find in the checkout line of a drug store.

On the workload side, how many files are you expecting the system
will be writing in parallel? For example, is the user going to be
running "make -j32" while building some software project? Probably
not, because why would you connect a really powerful AMD Threadripper
CPU to the cheapest possible trash flash device? That would be a
very unbalanced system. But if this is going to be a low-demand,
low-power environment, then you might be able to use an even simpler
allocator, say, like what the FAT file system uses.
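
For comparison, the FAT allocator is, roughly speaking, little more
than a rotating "next free" cursor over the allocation map. A minimal
sketch of that idea (purely illustrative, not FAT or ext4 code):

	#include <stdbool.h>

	#define NR_BLOCKS	1024

	static bool block_used[NR_BLOCKS];
	static unsigned int next_free;		/* rotating cursor */

	/* Return a free block number, or -1 if the volume is full. */
	static int alloc_block(void)
	{
		for (unsigned int i = 0; i < NR_BLOCKS; i++) {
			unsigned int blk = (next_free + i) % NR_BLOCKS;

			if (!block_used[blk]) {
				block_used[blk] = true;
				next_free = (blk + 1) % NR_BLOCKS;
				return blk;
			}
		}
		return -1;
	}

That's a dozen lines, it spreads allocations across the whole device
over time, and it doesn't drag in any of mballoc's machinery.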

Speaking of FAT, depending on the quality of the storage device and
the benchmark results, perhaps another file system would be a better
choice. In addition to FAT, another file system to consider is f2fs,
which is a log-structured file system that avoids the static inode
table that might be a problem with a flash device that needs
file-system-aware wear leveling.

> Of course, this is not optimal for classic HDDs, but NVMe drives behave
> differently.

I'm not aware of *any* NVMe devices that would find this to be
advantageous. This is where some real benchmarks with real hardware,
and with a specific workload that is used in real-world devices,
would be really helpful.

Cheers,
- Ted