Message-ID: <51EC9A77.6090109@zytor.com>
Date:	Sun, 21 Jul 2013 19:35:35 -0700
From:	"H. Peter Anvin" <hpa@...or.com>
To:	"Theodore Ts'o" <tytso@....edu>
CC:	linux-ext4@...r.kernel.org
Subject: Re: e2fsck running extremely slowly

On 07/21/2013 06:29 PM, Theodore Ts'o wrote:
> On Sun, Jul 21, 2013 at 03:45:20PM -0700, H. Peter Anvin wrote:
>> I have a large filesystem (14 TB) which suffered a RAID failure which
>> seems to have corrupted some inodes.  Unfortunately as a result there
>> are now a number of inodes with "false extents" which result in a very
>> large number of multiply claimed blocks.
>>
>> I have tried to run e2fsck on this filesystem, and it gets as far as
>> phase 1D, at which point it starts running at a glacial pace.  After 48
>> hours -- most of it sitting at 100% CPU executing no system calls at all
>> -- it claims to have processed a single file out of almost 10000.
> 
> What I usually do in this situation is to look at the inodes that are
> reported as corrupted in pass 1B, and examine them using debugfs.  If
> they look insane, nuke them using the debugfs clri command.
> 
> Yes, this is horribly manual.  The long term planned solution is that
> the metadata checksum feature will allow us to determine the metadata
> is corrupt, and then e2fsck will know which fs metadata it can trust,
> and which it will have to discard.
>
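
Concretely, I read that suggestion as something like the following for
each inode (the inode number and device below are just placeholders):

	debugfs -R "stat <12345>" /dev/mapper/bigvol
	debugfs -w -R "clri <12345>" /dev/mapper/bigvol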

Manual isn't really practical with almost 10,000 reported inodes...
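
Even scripting it, say by generating a debugfs command file from the
pass 1B output and applying it in one run (again with placeholder inode
numbers and device):

	printf 'clri <%d>\n' 12345 23456 34567 > clri.cmds
	debugfs -w -f clri.cmds /dev/mapper/bigvol

still leaves a human to decide, inode by inode, which ones actually
look insane.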

	-hpa

