Message-ID: <20081217032517.GE10590@mit.edu>
Date:	Tue, 16 Dec 2008 22:25:17 -0500
From:	Theodore Tso <tytso@....edu>
To:	linux-kernel@...r.kernel.org
Subject: Re: Very slow header cache in mutt if the maildir is on ext3

On Wed, Dec 17, 2008 at 01:52:10AM +0100, Tino Keitel wrote:
> OK, after glancing at the strace output again, I see that the seek
> offsets are much more linear in the XFS case, whereas they are pretty
> random in the ext3 case.  I guess that this is connected to the order
> of the files in the maildir, which depends on the FS type.  So this is
> a bug in mutt which makes reading the header cache dead slow if the
> files are in an inconvenient order.

I *thought* mutt had a patch which sorted the files returned by
readdir() by inode number, and then opened the files in inode number
order; maybe it was a distro-specific patch that was never pushed
back to mainline, though.  In any case, sorting the list of directory
entries returned by readdir() by inode number does solve the problem
for ext3 with htree, and in general is a good optimization for most
filesystems.  (See attached for an LD_PRELOAD hack that demonstrates
the optimization.)

There is also a fix in ext4 which partially addresses this problem,
which could be back-ported to ext3:

commit 240799cdf22bd789ea6852653c3b879d35ad0a6c
Author: Theodore Ts'o <tytso@....edu>
Date:   Thu Oct 9 23:53:47 2008 -0400

    ext4: Use readahead when reading an inode from the inode table
    
    With modern hard drives, reading 64k takes roughly the same time as
    reading a 4k block.  So request readahead for adjacent inode table
    blocks to reduce the time it takes when iterating over directories
    (especially when doing this in htree sort order) in a cold cache case.
    With this patch, the time it takes to run "git status" on a kernel
    tree after flushing the caches via "echo 3 > /proc/sys/vm/drop_caches"
    is reduced by 21%.
    
    Signed-off-by: "Theodore Ts'o" <tytso@....edu>

						- Ted


Download attachment "spd_readdir.tar.gz" of type "application/octet-stream" (3555 bytes)
