Message-Id: <200703050136.37181.arnd@arndb.de>
Date: Mon, 5 Mar 2007 01:36:36 +0100
From: Arnd Bergmann <arnd@...db.de>
To: Jörn Engel <joern@...ybastard.org>
Cc: Ulrich Drepper <drepper@...hat.com>,
Anton Altaparmakov <aia21@....ac.uk>,
Christoph Hellwig <hch@...radead.org>,
Dave Kleikamp <shaggy@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Amit K. Arora" <aarora@...ux.vnet.ibm.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-ext4@...r.kernel.org, suparna@...ibm.com, cmm@...ibm.com,
alex@...sterfs.com, suzuki@...ibm.com
Subject: Re: [RFC] Heads up on sys_fallocate()
On Monday 05 March 2007, Jörn Engel wrote:
> That actually causes an interesting problem for compressing filesystems.
> The space consumed by blocks depends on their contents and how well they
> compress.  At the moment, the only option I see to support
> posix_fallocate for LogFS is to set an inode flag disabling compression,
> then allocate the blocks.
>
> But if the file already contains large amounts of compressed data, I
> have a problem. Disabling compression for a range within a file is not
> supported, so I can only return an error. But which one?
Using the current glibc implementation on a compressed file system should
ideally be a very expensive no-op, because writing zeroes to a file won't
actually allocate much space.  You also don't benefit from contiguous
allocation in logfs, since flash has uniform seek times across the whole
medium.
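
In other words, the emulation boils down to something like this (a
simplified userspace sketch of the idea, not the actual glibc code; the
4096-byte block size is an assumption, real code would ask the fs):

#include <unistd.h>
#include <errno.h>

static int fallocate_by_writing(int fd, off_t offset, off_t len)
{
        const off_t blocksize = 4096;   /* assumed; real code queries the fs */
        off_t pos;

        for (pos = offset; pos < offset + len; pos += blocksize) {
                char c = 0;
                ssize_t n = pread(fd, &c, 1, pos);

                if (n < 0)
                        return errno;
                /* write back the byte we read (or a zero past EOF) so
                   the block gets allocated by the file system */
                if (pwrite(fd, &c, 1, pos) != 1)
                        return errno;
        }
        return 0;
}

On a compressing file system all those zero blocks shrink to almost
nothing on the medium, which is why the call ends up doing a lot of I/O
but reserving hardly any space.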
I'd suggest you implement posix_fallocate as a real no-op and just return
success without doing anything.  You could also return ENOSPC in case
the blocks requested by posix_fallocate don't fit on the medium without
compression, but that is more or less just guesswork (like statfs is).
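
Roughly something like this hypothetical hook for whatever entry point
sys_fallocate ends up calling -- the prototype and the free-space helper
are made up here, just to show the shape of it:

#include <linux/fs.h>
#include <linux/errno.h>

/* Hypothetical logfs fallocate handler: succeed without preallocating
   anything, but guess at ENOSPC by assuming the worst case, i.e. that
   the data will not compress at all. */
static int logfs_fallocate(struct inode *inode, loff_t offset, loff_t len)
{
        struct super_block *sb = inode->i_sb;
        u64 blocks_needed = (len + sb->s_blocksize - 1) >> sb->s_blocksize_bits;

        /* logfs_free_blocks() is a made-up helper standing in for
           whatever the fs uses to count free blocks */
        if (blocks_needed > logfs_free_blocks(sb))
                return -ENOSPC;

        return 0;       /* nothing to do, allocation happens at write time */
}

The ENOSPC check is exactly the guesswork mentioned above: by the time
the data is actually written and compressed, the real space consumption
may be far lower.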
Arnd <><