Date:	Tue, 01 Nov 2011 09:10:54 +0800
From:	Coly Li <i@...y.li>
To:	Ted Ts'o <tytso@....edu>
CC:	Andreas Dilger <adilger@...ger.ca>,
	Andreas Dilger <adilger@...mcloud.com>,
	linux-ext4 development <linux-ext4@...r.kernel.org>,
	Alex Zhuravlev <bzzz@...mcloud.com>, Tao Ma <tm@....ma>,
	"hao.bigrat@...il.com" <hao.bigrat@...il.com>
Subject: Re: bigalloc and max file size

On 2011-11-01 03:38, Ted Ts'o wrote:
> On Tue, Nov 01, 2011 at 01:39:34AM +0800, Coly Li wrote:
>> In some applications, we allocate a big file which occupies most of the space of a file system, while the file system is
>> built on (expensive) SSDs. In such a configuration, we want fewer blocks allocated for the inode table and bitmaps. If the
>> max extent length could be much bigger, there is a chance to have far fewer block groups, which leaves more blocks for
>> regular files. The current bigalloc code already does well, but there is still room to do better. The sys-admin team
>> believes cluster-based extents can help ext4 consume as little metadata as a raw disk does, and gain as many available
>> data blocks as a raw disk does, too. This is a small saving on one single SSD, but in our cluster environment it can
>> help to save a noticeable amount of capex.
> 
> OK, but you're not running into the 16TB file size limitation, are
> you?  
No, we are not running into this problem.

> That would be a lot of SSDs.  
Yes, IMHO that's a lot of SSDs.

> I assume the issue then is you
> want to minimize the number of extents, limited by the 15-bit extent
> length field?
Not only extents; we also want to minimize inode table blocks and bitmap blocks.
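(Rough numbers, assuming 4KiB blocks: the 15-bit length field limits a single initialized extent to 32768 blocks,
i.e. 128MiB, so a fully written 1TiB file needs at least 8192 extents.)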

> 
> What cluster size are you thinking about?
Currently we are testing a 1MB cluster size. The extreme ideal configuration (for one use case) is to have only one block
group on the whole file system. (In this use case) we are willing to try the biggest possible cluster size if we are able to.
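
As a rough sketch of the savings (assuming 4KiB blocks): without bigalloc, one block-group bitmap holds 32768 bits, so a
block group covers 32768 blocks = 128MiB; with bigalloc and a 1MiB cluster the same bitmap covers 32768 clusters = 32GiB.
For the same capacity that is roughly 256x fewer block groups, and correspondingly fewer bitmap and inode-table blocks.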

> And how do you plan to
> initialize it?  Via fallocate, or by explicitly writing zeros to the
> whole file (so all of the blocks are marked as initialized)?  Is it
> going to be a sparse file?
In the application I mentioned above, the file is allocated by fallocate(2). Writes to the file are appending writes of
8MB length; when writing reaches the end of the file, it wraps back to the beginning of the file and continues writing.
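
Roughly, the write pattern looks like the sketch below (path, sizes, and pass count are illustrative only, not our
production code):

	/*
	 * Illustrative sketch: preallocate one big file with fallocate(2),
	 * then write it as a circular log in 8MB chunks, wrapping back to
	 * offset 0 when the end of the file is reached.
	 */
	#define _GNU_SOURCE
	#define _FILE_OFFSET_BITS 64
	#include <fcntl.h>
	#include <unistd.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define CHUNK  (8ULL << 20)	/* 8MB per write */
	#define FILESZ (64ULL << 30)	/* e.g. 64GB preallocated up front */

	int main(void)
	{
		int fd = open("/mnt/ssd/bigfile", O_CREAT | O_WRONLY, 0644);
		if (fd < 0) { perror("open"); return 1; }

		/* allocate every block up front; extents stay unwritten until used */
		if (fallocate(fd, 0, 0, FILESZ) < 0) { perror("fallocate"); return 1; }

		char *buf = malloc(CHUNK);
		if (!buf) { perror("malloc"); return 1; }
		memset(buf, 0xab, CHUNK);

		off_t off = 0;
		/* two full passes, just to show the wrap-around */
		for (unsigned long long i = 0; i < 2 * (FILESZ / CHUNK); i++) {
			if (pwrite(fd, buf, CHUNK, off) != (ssize_t)CHUNK) {
				perror("pwrite");
				break;
			}
			off += CHUNK;
			if ((unsigned long long)off >= FILESZ)
				off = 0;	/* wrap to the start of the file */
		}

		free(buf);
		close(fd);
		return 0;
	}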

Thanks.
-- 
Coly Li