Message-ID: <ebcb2e10-2528-6c5e-cdd4-ceeaeedb0ae6@gmail.com>
Date: Thu, 29 Jul 2021 20:32:01 +0200
From: Mikhail Morfikov <mmorfikov@...il.com>
To: Theodore Ts'o <tytso@....edu>
Cc: linux-ext4@...r.kernel.org
Subject: Re: Is it safe to use the bigalloc feature in the case of ext4
filesystem?
On 29/07/2021 19.59, Theodore Ts'o wrote:
> On Wed, Jul 28, 2021 at 11:36:27AM +0200, Mikhail Morfikov wrote:
>> Thanks for the answer.
>>
>> I have one question. Basically there's the /etc/mke2fs.conf file and
>> I've created the following stanza in it:
>>
>> bigdata = {
>>     errors = remount-ro
>>     features = has_journal,extent,huge_file,flex_bg,metadata_csum,64bit,dir_nlink,extra_isize,bigalloc,^uninit_bg,sparse_super2
>>     inode_size = 256
>>     inode_ratio = 4194304
>>     cluster_size = 4M
>>     reserved_ratio = 0
>>     lazy_itable_init = 0
>>     lazy_journal_init = 0
>> }
>>
>> It looks like the cluster_size parameter is ignored in such a case (I've
>> tried both 4M and 4194304 values), and the filesystem was created with
>> a 64K cluster size (via mkfs -t bigdata -L bigdata /dev/sdb1), which is
>> the default when the bigalloc feature is set.
>
> It does work, but you need to use an integer value for cluster_size,
> and it needs to be in the [fs_types] section. So something like what I
> have attached below.
>
> And then try using the command "mke2fs -t ext4 -T bigdata -L bigdata
> /dev/sdb1".
Yes, this helped, and the cluster size was set to 4194304 as it should be.
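(For reference, the cluster size that actually got used can be read back
from the superblock; assuming the filesystem is on /dev/sdb1:

    # dumpe2fs prints a "Cluster size:" line when bigalloc is enabled
    dumpe2fs -h /dev/sdb1 | grep -i 'cluster size'

should report 4194304 here.)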
>
> If you see the hugefile and hugefiles stanzas below, that's an example
> of one way bigalloc has gotten a fair amount of use. In this use case
> mke2fs has pre-allocated the huge data files, guaranteeing that they
> will be 100% contiguous. We're using a 32k cluster because there are
> some metadata files where better allocation efficiency is desired.
I'll try them both and see whether I can use either of them on
my drive.
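Presumably something like the invocation you showed above, just with the
other profile names (sketching from the stanzas below; the device is only
an example):

    # many pre-allocated, fully contiguous 4G chunk files, no bigalloc
    mke2fs -t ext4 -T hugefiles /dev/sdb1

    # a single pre-allocated huge file, bigalloc with a 32k cluster size
    mke2fs -t ext4 -T hugefile /dev/sdb1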
>
> Cheers,
>
> - Ted
>
> [defaults]
>     base_features = sparse_super,large_file,filetype,resize_inode,dir_index,ext_attr
>     default_mntopts = acl,user_xattr
>     enable_periodic_fsck = 0
>     blocksize = 4096
>     inode_size = 256
>     inode_ratio = 16384
>     undo_dir = /var/lib/e2fsprogs/undo
>
> [fs_types]
>     ext3 = {
>         features = has_journal
>     }
>     ext4 = {
>         features = has_journal,extent,huge_file,flex_bg,metadata_csum,64bit,dir_nlink,extra_isize
>         inode_size = 256
>     }
>     small = {
>         blocksize = 1024
>         inode_size = 128
>         inode_ratio = 4096
>     }
>     floppy = {
>         blocksize = 1024
>         inode_size = 128
>         inode_ratio = 8192
>     }
>     big = {
>         inode_ratio = 32768
>     }
>     huge = {
>         inode_ratio = 65536
>     }
>     news = {
>         inode_ratio = 4096
>     }
>     largefile = {
>         inode_ratio = 1048576
>         blocksize = -1
>     }
>     largefile4 = {
>         inode_ratio = 4194304
>         blocksize = -1
>     }
>     hurd = {
>         blocksize = 4096
>         inode_size = 128
>     }
>     hugefiles = {
>         features = extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize,^resize_inode,sparse_super2
>         hash_alg = half_md4
>         reserved_ratio = 0.0
>         num_backup_sb = 0
>         packed_meta_blocks = 1
>         make_hugefiles = 1
>         inode_ratio = 4194304
>         hugefiles_dir = /storage
>         hugefiles_name = chunk-
>         hugefiles_digits = 5
>         hugefiles_size = 4G
>         hugefiles_align = 256M
>         hugefiles_align_disk = true
>         zero_hugefiles = false
>         flex_bg_size = 262144
>     }
>
>     hugefile = {
>         features = extent,huge_file,bigalloc,flex_bg,uninit_bg,dir_nlink,extra_isize,^resize_inode,sparse_super2
>         cluster_size = 32768
>         hash_alg = half_md4
>         reserved_ratio = 0.0
>         num_backup_sb = 0
>         packed_meta_blocks = 1
>         make_hugefiles = 1
>         inode_ratio = 4194304
>         hugefiles_dir = /storage
>         hugefiles_name = huge-file
>         hugefiles_digits = 0
>         hugefiles_size = 0
>         hugefiles_align = 256M
>         hugefiles_align_disk = true
>         num_hugefiles = 1
>         zero_hugefiles = false
>     }
>     bigdata = {
>         errors = remount-ro
>         features = has_journal,extent,huge_file,flex_bg,metadata_csum,64bit,dir_nlink,extra_isize,bigalloc,^uninit_bg,sparse_super2
>         inode_size = 256
>         inode_ratio = 4194304
>         cluster_size = 4194304
>         reserved_ratio = 0
>         lazy_itable_init = 0
>         lazy_journal_init = 0
>     }
>