Date:	Mon, 5 Mar 2007 00:32:14 +0000
From:	Anton Altaparmakov <aia21@....ac.uk>
To:	Jörn Engel <joern@...ybastard.org>
Cc:	Ulrich Drepper <drepper@...hat.com>, Arnd Bergmann <arnd@...db.de>,
	Christoph Hellwig <hch@...radead.org>,
	Dave Kleikamp <shaggy@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Amit K. Arora" <aarora@...ux.vnet.ibm.com>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-ext4@...r.kernel.org, suparna@...ibm.com, cmm@...ibm.com,
	alex@...sterfs.com, suzuki@...ibm.com
Subject: Re: [RFC] Heads up on sys_fallocate()


On 5 Mar 2007, at 00:16, Jörn Engel wrote:

> On Sun, 4 March 2007 14:38:13 -0800, Ulrich Drepper wrote:
>>
>> When you do it like this, how can the kernel/filesystem *guarantee*
>> that when the data is written there actually is room on the hard
>> drive?
>>
>> What you described seems like using truncate/ftruncate to increase
>> the file's size.  That is not at all what posix_fallocate is for.
>> posix_fallocate must make sure that the requested blocks on the disk
>> are reserved (allocated) for the file's use and that at no point in
>> the future will, say, a msync() fail because a mmap(MAP_SHARED) page
>> has been written to.
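
(Side note: the reservation Ulrich describes is observable from
userspace.  A minimal sketch, using Python's os.posix_fallocate
wrapper around the C call and a throwaway temp file, purely for
illustration:)

```python
# Sketch only: posix_fallocate reserves space up front, so later
# writes into the reserved range cannot fail with ENOSPC.
# os.posix_fallocate is Python's wrapper for the C library call.
import os, tempfile

fd, path = tempfile.mkstemp()
try:
    os.posix_fallocate(fd, 0, 1 << 20)      # reserve 1 MiB at offset 0
    st = os.fstat(fd)
    # The file size now covers the reserved range...
    print(st.st_size >= (1 << 20))          # True
    # ...and on most filesystems st_blocks shows real blocks backing it.
finally:
    os.close(fd)
    os.unlink(path)
```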
>
> That actually causes an interesting problem for compressing
> filesystems.  The space consumed by blocks depends on their contents
> and how well they compress.  At the moment, the only option I see to
> support posix_fallocate for LogFS is to set an inode flag disabling
> compression, then allocate the blocks.
>
> But if the file already contains large amounts of compressed data, I
> have a problem.  Disabling compression for a range within a file is
> not supported, so I can only return an error.  But which one?

I don't know how your compression algorithm works, but at least on
NTFS that bit is easy: you allocate the blocks and mark them as
allocated, and the compression engine then writes non-compressed data
to those blocks.  The test is simply "does compression block X contain
any sparse blocks?"  If the answer is "yes", the block is treated as
compressed data; if the answer is "no", it is treated as uncompressed
data.  This means that if the data cannot be compressed (or, in some
cases, if the compressed data is bigger than the uncompressed data) it
is stored non-compressed.  That is the most space-efficient way to do
things.

An alternative would be to allocate the blocks and then, when the data
is written, perform the compression and free any blocks you no longer
need because the data has shrunk sufficiently.  Depending on the
implementation details this could create horrible fragmentation: you
would allocate a large consecutive region and then drop random blocks
from it, leaving the file fragmented.
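
A toy sketch of that allocate-then-trim alternative (flat block-number
space, hypothetical helper names, all assumptions of mine):

```python
# Sketch only: preallocate a contiguous run of blocks, then after the
# data is written and compressed, free the blocks it no longer needs.
# The freed tails become holes scattered between live extents, which
# is the fragmentation risk described above.

def fallocate(extent_start: int, nblocks: int) -> list:
    """Reserve a contiguous run of block numbers (toy model)."""
    return list(range(extent_start, extent_start + nblocks))

def write_compressed(extent: list, blocks_needed: int):
    """Keep only the blocks the compressed data occupies; free the rest."""
    kept = extent[:blocks_needed]
    freed = extent[blocks_needed:]
    return kept, freed

# A 16-block reservation whose data compresses into 5 blocks frees an
# 11-block hole; later allocations fill such holes, interleaving files.
kept, freed = write_compressed(fallocate(100, 16), 5)
```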

Best regards,

	Anton
-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/


