Open Source and information security mailing list archives
Date:	Thu, 3 Dec 2015 10:58:41 -0700
From:	Andreas Dilger <adilger@...ger.ca>
To:	lokesh jaliminche <lokesh.jaliminche@...il.com>
Cc:	"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>
Subject: Re: Regarding random group search start for allocation of inode.

On Dec 3, 2015, at 01:07, lokesh jaliminche <lokesh.jaliminche@...il.com> wrote:
> 
> Let me clarify my question: why is the group search start random?
> We could also start the search for valid groups for inode allocation
> from the beginning. Since the group search start is random, inode
> selection might end up near the end of the groups, which might
> affect IO performance.

Starting the inode search at the beginning of the disk each time
means that inode allocation will be inefficient because it will search
over groups that are mostly or entirely full already.

Allocating the new directory in a semi-random group, one that is
relatively unused, ensures that new
inode and block allocations are relatively efficient afterward. 

Cheers, Andreas

> On Thu, Dec 3, 2015 at 1:14 PM, lokesh jaliminche
> <lokesh.jaliminche@...il.com> wrote:
>> hello folks,
>>                I am new to the ext4 code. I was going through the
>> ext4 source for inode allocation.
>> There is one thing I did not understand about the selection of groups
>> for inode allocation. I came across this code snippet, which is part
>> of the find_group_orlov function. The question is: why is the group
>> search start random?
>> 
>> Code snippet:
>> ==========
>> 	if (qstr) {
>> 		hinfo.hash_version = LDISKFS_DX_HASH_HALF_MD4;
>> 		hinfo.seed = sbi->s_hash_seed;
>> 		ldiskfsfs_dirhash(qstr->name, qstr->len, &hinfo);
>> 		grp = hinfo.hash;
>> 	} else
>> 		get_random_bytes(&grp, sizeof(grp));
>> 	parent_group = (unsigned)grp % ngroups;
>> 	for (i = 0; i < ngroups; i++) {
>> 		g = (parent_group + i) % ngroups;
>> 		get_orlov_stats(sb, g, flex_size, &stats);
>> 		if (!stats.free_inodes)
>> 			continue;
>> 		if (stats.used_dirs >= best_ndir)
>> 			continue;
>> 		if (stats.free_inodes < avefreei)
>> 			continue;
>> 		if (stats.free_blocks < avefreeb)
>> 			continue;
>> 		grp = g;
>> 		ret = 0;
>> 		best_ndir = stats.used_dirs;
>> 	}
>> 
>> Thanks & Regards,
>>  Lokesh
