Date:	Fri, 26 Oct 2007 10:58:14 -0700
From:	Mingming Cao <cmm@...ibm.com>
To:	Valerie Clement <valerie.clement@...l.net>
Cc:	ext4 development <linux-ext4@...r.kernel.org>,
	Alex Tomas <alex@...sterfs.com>,
	Andreas Dilger <adilger@...sterfs.com>
Subject: Re: problem with delayed allocation option

On Fri, 2007-10-26 at 14:28 +0200, Valerie Clement wrote:
> Hi all,
> 
Hi Valerie,

> I ran a small test which creates one directory and 20 8-KB files in it.
> 
> When the filesystem is mounted without the delalloc option, here is the
> output of the command dumpe2fs for the group in which the directory and 
> the files are created:
> 
> Group 532 : (Blocks 17432576-17465343)
>    Block bitmap at 17432576 (+0), Inode bitmap at 17432577 (+1)
>    Inode table at 17432578-17433089 (+2)
>    32213 free blocks, 16363 free inodes, 1 directories
>    Free blocks : 17433090-17459199, 17459241-17465343
>    Free inodes : 8716310-8732672
> 
> 
> When the filesystem is mounted with the delalloc option, the same test
> gives a different result:
> 
> Group 395 : (Blocks 12943360-12976127)
>    Block bitmap at 12943360 (+0), Inode bitmap at 12943361 (+1)
>    Inode table at 12943362-12943873 (+2)
>    32213 free blocks, 16363 free inodes, 1 directories
>    Free blocks : 12943874-12955647, 12955650-12955655, 
> 12955658-12955663, 12955666-12955671, 12955674-12955679, 
> 12955682-12955687, 12955690-12955695, 12955698-12955703, 
> 12955706-12955711, 12955714-12955719, 12955722-12955727, 
> 12955730-12955735, 12955738-12955743, 12955746-12955751, 
> 12955754-12955759, 12955762-12955767, 12955770-12955775, 
> 12955778-12955783, 12955786-12955791, 12955794-12955799, 
> 12955802-12961791, 12961793-12976127
>    Free inodes : 6471702-6488064
> 
> In the first case, the allocated blocks are contiguous whereas they are
> not in the second case.
> 
> After adding traces in the code to understand why the behavior is
> different with the delalloc option, I found that the problem is related
> to the inode reservation window.

> To simplify, without the delalloc option we have the following scenario:
> For each inode,
>   - call alloc_new_reservation() to allocate a new reservation window
>   - allocate blocks for data
>   - write data to disk
>   - call ext4_discard_reservation() when the inode is closed.
> 
> With the delalloc option, when the data are written to disk we have:
> For each inode,
>   - call alloc_new_reservation() to allocate a new reservation window
>   - allocate blocks for data
>   - write data to disk
> 
> 
> I think a call to ext4_discard_reservation() is missing somewhere and
> the question is where.
> 
Oh, that should be the block reservation window, not the inode reservation
window.

The problem with delayed allocation and block reservation is that we don't
know when we are supposed to close the window: the file may be closed while
its data is still dirty in the page cache and the blocks have not been
allocated yet. We would like to keep the window open so that when the
delayed allocation happens later, it can take advantage of the reservation.
On the other hand, that may lead to external fragmentation of the
filesystem.
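
To make that concrete, here is a toy userspace sketch (plain C, not ext4
code) of the two call sequences Valerie describes above. The window size of
8 blocks and the 2 blocks per 8-KB file are assumptions chosen purely for
illustration; they happen to match the 2-allocated/6-free pattern in the
dumpe2fs output, but the real window size and allocator state live in the
kernel.

/*
 * Toy model of one block group: each "file" gets a per-inode reservation
 * window of RESV_WIN blocks and allocates BLOCKS_PER_FILE blocks from it.
 * Discarding the window when the inode is released lets the next file
 * start right behind the previous one; keeping it open (as can happen
 * with delayed allocation) forces the next window past the unused tail.
 */
#include <stdio.h>

#define GROUP_BLOCKS    200
#define RESV_WIN          8	/* blocks reserved per inode's window */
#define BLOCKS_PER_FILE   2	/* one 8-KB file = two 4-KB blocks    */
#define NR_FILES         20

static void layout(int discard_window_at_close, const char *label)
{
	char used[GROUP_BLOCKS] = { 0 };
	int next_free = 0;	/* first block past all allocated blocks */
	int window_end = 0;	/* first block past all open windows     */

	for (int f = 0; f < NR_FILES; f++) {
		/* a new window must start past every window still held open */
		int start = discard_window_at_close ? next_free : window_end;

		for (int b = 0; b < BLOCKS_PER_FILE; b++)
			used[start + b] = 1;

		next_free  = start + BLOCKS_PER_FILE;
		window_end = start + RESV_WIN;
	}

	printf("%s:\n  ", label);
	for (int b = 0; b < 5 * RESV_WIN; b++)
		putchar(used[b] ? 'X' : '.');
	printf("   (X = allocated, . = free)\n");
}

int main(void)
{
	layout(1, "window discarded when the inode is released (no delalloc)");
	layout(0, "window kept open for a pending delayed allocation");
	return 0;
}

The first layout comes out fully contiguous; the second prints the same
"two used, six skipped" rhythm as the Free blocks list above.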

With mballoc, the ext3-style block reservation should be turned off and
replaced with mballoc's in-core per-group preallocation.
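
In the same toy spirit, the idea behind an in-core per-group preallocation
can be sketched as below. This is not mballoc's actual API or data
structures; PREALLOC_CHUNK and the starting block number (taken from the
dumpe2fs output above) are arbitrary illustration values. The point is only
that one larger contiguous chunk is carved up among many small files, so
they still land next to each other without a long-lived per-inode window.

#include <stdio.h>

#define PREALLOC_CHUNK 64	/* blocks grabbed from the group at once */

struct group_pa {
	unsigned long next;		/* next block to hand out           */
	unsigned long remaining;	/* blocks left in the current chunk */
};

/* pretend to reserve a fresh contiguous chunk from the group's free space */
static unsigned long grab_chunk(unsigned long *group_cursor)
{
	unsigned long start = *group_cursor;

	*group_cursor += PREALLOC_CHUNK;
	return start;
}

static unsigned long alloc_blocks(struct group_pa *pa,
				  unsigned long *group_cursor,
				  unsigned long count)
{
	if (pa->remaining < count) {
		pa->next = grab_chunk(group_cursor);
		pa->remaining = PREALLOC_CHUNK;
	}

	unsigned long start = pa->next;

	pa->next += count;
	pa->remaining -= count;
	return start;
}

int main(void)
{
	unsigned long group_cursor = 12943874;	/* first free block above */
	struct group_pa pa = { 0, 0 };

	for (int f = 0; f < 20; f++) {
		unsigned long start = alloc_blocks(&pa, &group_cursor, 2);
		printf("file %2d -> blocks %lu-%lu\n", f, start, start + 1);
	}
	return 0;
}

Every 8-KB file ends up directly behind the previous one, because the
unused tail of the chunk stays available to the next allocation instead of
being fenced off by a per-inode window.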

Has the new delayed allocation been integrated with mballoc yet?

> I tried to add this call at the end of the ext4_da_get_block_write()
> function. This seems to fix the problem, as the blocks are then allocated
> contiguously on disk, but the function seems to be called too many times,
> so I think it is perhaps not the right place to call it.
> 
> Who could look into this problem?
> I've got a few days off, so I won't be able to help much over the next
> few days, but the problem is easily reproducible.
> 
> Wouldn't this also explain why the compilebench results posted by Chris
> Mason are bad in some cases?
> 
>    Valérie
> 
> 

