Message-ID: <48F395AA.30208@redhat.com>
Date: Mon, 13 Oct 2008 14:38:34 -0400
From: Chris Snook <csnook@...hat.com>
To: Jörn Engel <joern@...fs.org>
CC: Stefan Monnier <monnier@....umontreal.ca>,
linux-kernel@...r.kernel.org
Subject: Re: Filesystem for block devices using flash storage?
Jörn Engel wrote:
> On Mon, 13 October 2008 13:30:29 -0400, Chris Snook wrote:
>>>> logfs tries to solve the write amplification problem by forcing all write
>>>> activity to be sequential. I'm not sure how mature it is.
>>> Still under development. What exactly do you mean by the write
>>> amplification problem?
>> Write amplification is where a 512 byte write turns into a 128k write,
>> due to erase block size.
>
> Ah, yes. Current logfs still triggers that a bit too often. I'm
> currently working on the format changes to avoid the amplification as
> much as possible.
>
> Another nasty side effect of this is that heuristics for wear leveling
> are always imprecise. And wear leveling is still required for most
> devices. See http://www.linuxconf.eu/2007/papers/Engel.pdf
>
>> Intel is claiming a write amplification factor of 1.1. Either they're
>> using very small erase blocks, or doing something very smart in the
>> controller.
>
> With very small erase blocks the factor should be either 1 or 2, not
> 1.1. Most likely they work very much like logfs does, essentially doing
> the whole log-structured thing internally.
>
> Jörn
>
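[The erase-block amplification discussed in the quoted exchange can be
illustrated with a small sketch. The sizes and the helper function here are
hypothetical, chosen only to match the 512-byte-write / 128k-erase-block
example above; real devices vary.]

```python
# Hypothetical sketch of worst-case write amplification caused by erase-block
# granularity: a small host write forces the device to rewrite a whole erase
# block. Sizes are illustrative, not taken from any specific device.

ERASE_BLOCK = 128 * 1024  # 128 KiB erase block, as in the example above
SECTOR = 512              # a single 512-byte host write

def worst_case_amplification(write_size, erase_block=ERASE_BLOCK):
    """Bytes actually written to flash divided by bytes the host asked to
    write, assuming each touched erase block is rewritten in full."""
    # round the write up to a whole number of erase blocks
    blocks_touched = -(-write_size // erase_block)
    return blocks_touched * erase_block / write_size

print(worst_case_amplification(SECTOR))       # 512 B write -> factor 256
print(worst_case_amplification(ERASE_BLOCK))  # aligned full-block write -> 1
```

[This is also why, with very small erase blocks, the factor tends toward 1
(aligned writes) or 2 (writes straddling one block boundary), rather than a
fraction like 1.1.]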
As I understand it, they mean that in a real-world workload that writes 1x data,
a total of 1.1x is written on flash. Real-world writes are usually, but not
always, larger than a single sector. Of course, the validity of this number
depends greatly on the test.
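[A sketch of how an aggregate figure like the claimed 1.1 might be defined:
total bytes the device writes to flash divided by total bytes the host
submits, summed over a whole workload. The workload and the 10% garbage
collection overhead below are made-up numbers for illustration only.]

```python
# Hypothetical measurement of an aggregate write amplification factor over a
# workload, as opposed to the per-write worst case.

def aggregate_factor(host_writes, flash_bytes_written):
    """Flash bytes written divided by host bytes submitted, whole workload."""
    return flash_bytes_written / sum(host_writes)

# A log-structured device appends incoming writes sequentially, so each write
# mostly costs only itself; garbage collection adds the extra traffic.
host = [4096] * 1000                # 1000 writes of 4 KiB each (illustrative)
gc_overhead = 0.1 * sum(host)       # assume GC rewrites 10% extra (made up)
print(aggregate_factor(host, sum(host) + gc_overhead))  # -> 1.1
```

[Under that reading, a factor of 1.1 says more about the internal
log-structuring and garbage collection than about the erase block size.]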
If someone has more info on the Intel devices, please clue me in.
-- Chris