Date:	Sat, 2 Feb 2013 12:45:36 -0700
From:	Andreas Dilger <adilger@...ger.ca>
To:	Lukáš Czerner <lczerner@...hat.com>
Cc:	Radek Pazdera <rpazdera@...hat.com>,
	"Theodore Ts'o" <tytso@....edu>, linux-ext4@...r.kernel.org
Subject: Re: [RFC] Optimizing readdir()

On 2013-01-30, at 4:34 AM, Lukáš Czerner wrote:
> On Tue, 29 Jan 2013, Radek Pazdera wrote:
>> On Tue, Jan 15, 2013 at 03:44:57PM -0700, Andreas Dilger wrote:
>>> Having an upper limit on the directory cache is OK too.  Read all
>>> of the entries that fit into the cache size, sort them, and return
>>> them to the caller.  When the caller has processed all of those
>>> entries, read another batch, sort it, return this list, repeat.
>>> 
>>> As long as the list is piecewise ordered, I suspect it would gain
>>> most of the benefit of linear ordering (sequential inode table
>>> reads, avoiding repeated lookups of blocks).  Maybe worthwhile if
>>> you could test this out?
>> 
>> I did the tests last week. I modified the spd_readdir preload to
>> read at most $SPD_READDIR_CACHE_LIMIT entries, sort them and repeat.
>> The patch is here:
>> 
>>    http://www.stud.fit.vutbr.cz/~xpazde00/soubory/dir-index-test-ext4/
>> 
>> I tested it with the limit set to 0 (i.e., no limit), 1000, 10000,
>> 50000, and completely without the preload. The test runs were
>> performed on the same directory, so the results shouldn't be
>> affected by positioning on disk.
>> 
>> Directory sizes went from 10k to 1.5M. The tests were run twice.
>> The first run is only with metadata. In the second run, each file
>> has 4096B of data.
>> 
>> The times decrease steadily as the cache limit increases.  The
>> differences are bigger in the case of 4096B files, where the data
>> blocks start to evict the inode tables.  However, copying is still
>> more than two times slower for 1.5M files when 50000 entries are
>> cached.

Still, caching 50k entries is twice as fast as caching none for the
1.5M-entry directory.  How much memory is that in total?  Maybe 2.5MB
(50k entries at roughly 50 bytes each), which isn't too bad at all
for any kind of modern system.
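
To be concrete, the batching scheme amounts to roughly the following
(a minimal userspace sketch, not the actual spd_readdir code; LIMIT
stands in for SPD_READDIR_CACHE_LIMIT, printing the names stands in
for returning entries to the caller, and error handling is omitted):

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LIMIT 50000	/* cf. SPD_READDIR_CACHE_LIMIT */

/* Compare two cached entries by inode number. */
static int by_ino(const void *a, const void *b)
{
	ino_t ia = (*(struct dirent * const *)a)->d_ino;
	ino_t ib = (*(struct dirent * const *)b)->d_ino;
	return (ia > ib) - (ia < ib);
}

int main(int argc, char **argv)
{
	DIR *dir = opendir(argc > 1 ? argv[1] : ".");
	struct dirent **batch;
	size_t n;

	if (!dir)
		return 1;
	batch = malloc(LIMIT * sizeof(*batch));

	do {
		/* Fill the cache with up to LIMIT entries... */
		for (n = 0; n < LIMIT; n++) {
			struct dirent *de = readdir(dir);
			if (!de)
				break;
			batch[n] = malloc(sizeof(*de));
			memcpy(batch[n], de, sizeof(*de));
		}
		/* ...sort the batch by inode number... */
		qsort(batch, n, sizeof(*batch), by_ino);
		/* ...and hand it out in inode order, then repeat. */
		for (size_t i = 0; i < n; i++) {
			puts(batch[i]->d_name);
			free(batch[i]);
		}
	} while (n == LIMIT);

	free(batch);
	closedir(dir);
	return 0;
}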

>> It might be interesting to test what happens when the size of the
>> files in the directory increases.

Hopefully ext4 will move the large files into a different group.

> those are interesting results, and they support the idea that you
> can get most of the performance of a completely sorted inode list
> by doing it in "batches", as long as the size of the batch is
> sufficiently large.  However, I do not think that using spd_readdir
> is the best approach for the problem, nor do I think that it should
> be part of the generic library.  Aside from its "hackish" nature and
> the fact that you will never be able to tell how much memory you can
> actually use for the sorting, the fact is that other filesystems
> handle this problem well enough in comparison with ext4, and we
> should really focus on fixing it rather than going around it.

I would argue that even if some on-disk optimization is found, it
will not help the majority of users, who do not have their files
laid out with the new format.  Also, spd_readdir can help all
filesystems, not just ext4.  It will help ext2, ext3, ext4, isofs,
etc.  It could call statfs() to detect the filesystem type, and skip
XFS and Btrfs if this proves not to help for them...
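
E.g. something like this (a rough sketch; the magic numbers come from
<linux/magic.h>, though XFS_SUPER_MAGIC may need a local fallback
define, and which filesystems to skip is only a guess until someone
measures it):

#include <sys/vfs.h>
#include <linux/magic.h>

#ifndef XFS_SUPER_MAGIC
#define XFS_SUPER_MAGIC	0x58465342	/* "XFSB" */
#endif

/* Decide whether sorting readdir() output is worth it on this fs. */
static int should_sort(const char *path)
{
	struct statfs st;

	if (statfs(path, &st) != 0)
		return 1;	/* unknown: default to sorting */

	switch (st.f_type) {
	case XFS_SUPER_MAGIC:
	case BTRFS_SUPER_MAGIC:
		return 0;	/* assumed not to need the help */
	default:
		return 1;	/* ext2/3/4, isofs, etc. */
	}
}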

I'm not against fixing this in the filesystem as well, but I think
it will be several years before the majority of users see it in a
kernel they are using.

Cheers, Andreas