Message-Id: <200703050144.55813.arnd@arndb.de>
Date: Mon, 5 Mar 2007 01:44:54 +0100
From: Arnd Bergmann <arnd@...db.de>
To: Anton Altaparmakov <aia21@....ac.uk>
Cc: Jörn Engel <joern@...ybastard.org>,
Ulrich Drepper <drepper@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
Dave Kleikamp <shaggy@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Amit K. Arora" <aarora@...ux.vnet.ibm.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-ext4@...r.kernel.org, suparna@...ibm.com, cmm@...ibm.com,
alex@...sterfs.com, suzuki@...ibm.com
Subject: Re: [RFC] Heads up on sys_fallocate()

On Monday 05 March 2007, Anton Altaparmakov wrote:
> An alternative would be to allocate blocks and then when the data is
> written perform the compression and free any blocks you do not need
> any more because the data has shrunk sufficiently. Depending on the
> implementation details this could potentially create horrible
> fragmentation as you would allocate a large consecutive region and
> then go and drop random blocks from that region thus making the file
> fragmented.

Unfortunately, this is not as easy on logfs, because there is no point
in allocating a block when there is no data to write into it. Fragmentation
on flash media is free, but you can never modify a block in place without
erasing it first, so the data always ends up at a new location on the
next write access anyway.
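
To illustrate what I mean, here is a made-up fragment (not actual logfs
code) showing how every write to a flash-backed block goes out of place,
so a physical block handed out in advance by fallocate() would be
abandoned on the very first write:

/*
 * Made-up example, not taken from logfs: a logical block is never
 * overwritten in place on flash.  Every write goes to a fresh
 * physical location and the old one only becomes usable again after
 * garbage collection has erased it.
 */
#include <stdint.h>

struct flash_map {
	uint32_t phys[1024];	/* logical block -> current physical block */
	uint32_t next_free;	/* next unwritten physical block */
};

/* A write always lands on a new physical block (out of place). */
static uint32_t flash_write_block(struct flash_map *map, uint32_t logical)
{
	uint32_t old_blk = map->phys[logical];
	uint32_t new_blk = map->next_free++;

	map->phys[logical] = new_blk;
	/* old_blk is now stale; it cannot be rewritten until its erase
	 * block has been garbage collected and erased. */
	(void)old_blk;
	return new_blk;
}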

One option that might work (similar to what you describe in your other mail)
is to keep a per-inode count of reserved blocks, without tying specific
blocks to the reservation. The journal then needs to maintain the total
number of reserved blocks across all files and keep that total consistent
with the per-inode counts.
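
Roughly something like this, with invented names and no claim to match
the real logfs data structures:

/*
 * Sketch only -- all names are invented and this is not actual logfs
 * code.  Each inode carries just a count of reserved blocks, and the
 * journal records the total over all inodes so the two can be kept
 * consistent.
 */
#include <errno.h>
#include <stdint.h>

struct sketch_inode {
	uint64_t reserved_blocks;	/* promised to this file, not yet placed */
};

struct sketch_journal {
	uint64_t total_reserved;	/* sum over all inodes, stored in the journal */
	uint64_t free_blocks;		/* blocks still unclaimed on the medium */
};

/* fallocate(): reserve space without choosing physical blocks. */
static int sketch_reserve(struct sketch_journal *j, struct sketch_inode *i,
			  uint64_t nblocks)
{
	if (j->free_blocks < nblocks)
		return -ENOSPC;
	j->free_blocks -= nblocks;
	j->total_reserved += nblocks;	/* must reach the journal together
					   with the inode update */
	i->reserved_blocks += nblocks;
	return 0;
}

/* A later write consumes one reservation when the block is finally placed. */
static void sketch_claim(struct sketch_journal *j, struct sketch_inode *i)
{
	if (i->reserved_blocks) {
		i->reserved_blocks--;
		j->total_reserved--;
	}
	/* otherwise it is an ordinary, unreserved allocation */
}

On journal replay the recorded total could then be checked against the
sum of the per-inode counts, which is the consistency requirement I
mean above.
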
Arnd <><