Date:	Fri, 12 Sep 2008 15:19:27 +0200
From:	Bodo Eggert <7eggert@....de>
To:	Lennart Sorensen <lsorense@...lub.uwaterloo.ca>,
	Harun Scheutzow <harun04@...eutzow.de>,
	linux-kernel@...r.kernel.org
Subject: Re: vfat file system extreme fragmentation on multiprocessor

Lennart Sorensen <lsorense@...lub.uwaterloo.ca> wrote:
> On Thu, Sep 11, 2008 at 08:01:16PM +0200, Harun Scheutzow wrote:

>> I'd like to share the following observation, made with different kernels
>> of the 2.6.x series on a T7100 Core2Duo CPU (effectively 2 processors).
>> I have not seen such a post while searching.
>> 
>> Two applications compress data at the same time and do their best to avoid
>> fragmenting the file system by writing blocks of 50 MByte to a VFAT
>> (FAT32) partition on a SATA hard disk, cluster size 8 KByte. The resulting
>> file size is 200 to 250 MByte. Getting 4 to 5 fragments per file is fine.
>> But at random, approximately at every 4th file, there are a few hundred up
>> to more than 4500 (most likely approx. 1500) fragments for each of the two
>> files written in parallel.
[...]

> I don't think fat filesystems have any concept of reserving space for
> expanding files.  It's a pretty simple filesystem, after all, designed for
> a single-CPU machine with a non-multitasking OS (if you can call DOS an
> OS).

And it's not designed to allow deleting files while they are open, and yet
we manage to keep the allocation in memory after removing the directory entry.
Keeping a pre-allocation would require a slightly different mechanism, because
the pre-allocated space sometimes needs to be stolen back.
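
Roughly like this (a minimal sketch in C; all names are made up, the real
logic lives in fs/fat/ and the VFS layer):

	/* Delete-while-open: drop the directory entry immediately, but
	 * release the FAT chain only when the last reference goes away. */
	struct fake_inode {
		unsigned int first_cluster;	/* head of the FAT chain */
		int nlink;			/* on-disk directory entries */
		int open_count;			/* in-memory references */
	};

	static void fat_free_chain(unsigned int cluster)
	{
		/* stand-in: walk the FAT from 'cluster', marking slots free */
		(void)cluster;
	}

	static void fake_unlink(struct fake_inode *inode)
	{
		inode->nlink = 0;	/* directory entry gone on disk ... */
		/* ... but the FAT chain is deliberately left alone */
	}

	static void fake_release(struct fake_inode *inode)
	{
		/* last close of an already-deleted file: free clusters now */
		if (--inode->open_count == 0 && inode->nlink == 0)
			fat_free_chain(inode->first_cluster);
	}

A pre-allocation would need the extra step of giving clusters back (or letting
another writer take them) while the file is still open, which is the part FAT
has no mechanism for.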

> Space tends to be allocated from the start of the disk wherever 
> free space is found since otherwise you would have to go searching for
> the free space, which isn't that efficient.

That's how it used to be done in very ancient DOS versions. Current versions
allocate from the most recently allocated cluster onward (and cache this value
in the superblock). It would be perfectly legal to search onward from the
directory's on-disk location instead, if you wanted to emulate ext2 behavior.
(You'd pick a random cluster when creating a directory.)
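
Something like this, sketched in C (fat_entry() and the names are
illustrative; on FAT32 the cached value is the FSINFO "next free cluster"
hint):

	#define FAT_FREE 0U

	extern unsigned int fat_entry(unsigned int cluster); /* read one FAT slot */

	/* Scan for a free cluster, starting just past the last one handed
	 * out and wrapping around. Clusters 0 and 1 are reserved; data
	 * clusters start at 2. Returns 0 if the volume is full. */
	static unsigned int alloc_cluster(unsigned int *hint,
					  unsigned int max_cluster)
	{
		unsigned int tried, c = *hint;

		for (tried = 2; tried <= max_cluster; tried++) {
			if (++c > max_cluster)
				c = 2;
			if (fat_entry(c) == FAT_FREE) {
				*hint = c;
				return c;
			}
		}
		return 0;
	}

Note that with a single shared hint, two processes appending in parallel take
turns advancing it, which would explain the cluster-by-cluster interleaving
Harun is seeing.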

[...]
> Now what would happen if you used ftruncate to extend the file you open
> to a large size, and then started writing it, and then set the size
> correctly at the end?

The fs would write zeroes instead of the data, doing the same bad allocation,
because FAT doesn't support holes (sparse files).
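
I.e. something like this (user-space C, hypothetical mount point) would just
move the cost around rather than avoid it:

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		/* hypothetical path; adjust for your VFAT mount */
		int fd = open("/mnt/vfat/test.bin", O_CREAT | O_WRONLY, 0644);

		if (fd < 0)
			return 1;

		/* On FAT this allocates AND zero-fills 250 MB of clusters up
		 * front, since the fs can't leave a hole behind the new EOF. */
		ftruncate(fd, 250L * 1024 * 1024);

		/* ... write the real data here, then trim to the final size: */
		/* ftruncate(fd, actual_size); */

		close(fd);
		return 0;
	}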
