Message-Id: <D7ACB97A-5960-4D68-868A-7547B36160C4@dilger.ca>
Date:	Wed, 4 Feb 2015 14:32:35 -0700
From:	Andreas Dilger <adilger@...ger.ca>
To:	Olaf Hering <olaf@...fle.de>
Cc:	linux-ext4@...r.kernel.org
Subject: Re: ext3_dx_add_entry complains about Directory index full

On Feb 4, 2015, at 6:52 AM, Olaf Hering <olaf@...fle.de> wrote:
> On Wed, Feb 04, Andreas Dilger wrote:
> 
>> How many files/subdirs in this directory?  The old ext3 limit was 32000
>> subdirs, which the "dir_nlink" feature fixed, but the limit is still 65000
>> subdirs without "dir_nlink" enabled.
> 
> See below:
> 
>>> # for t in d f l ; do echo "type $t: `find /media/BACKUP_OLH_500G/ -xdev -type $t | wc -l`" ; done
>>> type d: 1051396
>>> type f: 20824894
>>> type l: 6876

Is "BACKUP_OLH_500G" a single large directory with 1M directories and
20M files in it?  In that case, you are hitting the limits for the
current ext4 directory size with 20M+ entries.

Otherwise, I would expect you have subdirectories, and the link/count
limits are per directory, so getting these numbers for the affected
directory is what matters.
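
For the directory that actually triggers the error, a quick count of its
entries would confirm this, for example (the path is only a placeholder
for whichever directory ext4 is complaining about):

    find /media/BACKUP_OLH_500G/SOME_SUBDIR -maxdepth 1 | wc -l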

Running something like http://www.pdsi-scidac.org/fsstats/ can give
you a good idea of the min/max/avg distributions of file and directory
sizes, ages, and counts for your filesystem.

Finding the largest directories with something like:

    find /media/BACKUP_OLH_500G -type d -size +10M -ls

would tell us how big your directories actually are.  The fsstats data
will also tell you what the min/max/avg filename length is, which may
also be a factor.
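
If nothing shows up above 10MB, sorting all directories by size works
too, assuming GNU find, with something like:

    find /media/BACKUP_OLH_500G -xdev -type d -printf '%k\t%p\n' | sort -rn | head -20

which prints the 20 largest directories with their sizes in 1KB blocks.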

It would be surprising if you had such a large directory in a single
backup.  We typically test up to 10M files in a single directory.

> root@...ux-fceg:~ # time env -i /sbin/e2fsck -fDvv /dev/mapper/luks-861f1f73-7037-486a-9a8a-8588367fcf33
> e2fsck 1.42.12 (29-Aug-2014)
>      859307 regular files
>     1026949 directories
>    19504583 links

This implies that you have only about 1.9M in-use inodes (regular files
plus directories), while the find output above reports 20M filenames,
so almost all of them are hard links (about 23 names per file).  That
said, the error being reported is on the name insert and not on the
link counts, so either some directories hold huge numbers of entries
or the file names are long enough that the directory leaf blocks fill
up very quickly.
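
(Roughly 19504583 links / 859307 regular files is about 22.7, which is
where the ~23 names per file figure comes from.)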

> Block size:               1024

AH! This is the root of your problem.  Formatting with 1024-byte
blocks means that the two-level directory hash tree can only hold
about 128^2 * (1024 / filename_length * 3 / 4) entries, maybe 500k
entries or less if the names are long.
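
As a rough illustration, with names around 24 characters that works out
to 128^2 = 16384 leaf blocks of roughly 1024 / 24 * 3/4 = 32 entries
each, or about 520k entries total, far below the 20M names reported by
find above.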

This wouldn't be the default for a 500GB filesystem, but maybe you
picked it to optimize space usage for small files a bit?  A 1KB block
size is definitely not optimal for performance, and 4KB is much better.

Unfortunately, you need to reformat to get to 4KB blocks.
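
For reference, recreating the filesystem would be something like the
below (this destroys everything on the device, so only after the data
has been copied elsewhere):

    mkfs.ext4 -b 4096 /dev/mapper/luks-861f1f73-7037-486a-9a8a-8588367fcf33

where 4096 bytes is also what mke2fs would pick by default for a
filesystem of this size.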

Cheers, Andreas




