Message-ID: <47305C1D.5070500@systella.fr>
Date:	Tue, 06 Nov 2007 13:20:45 +0100
From:	BERTRAND Joël <joel.bertrand@...tella.fr>
To:	Justin Piszcz <jpiszcz@...idpixels.com>
CC:	Dan Williams <dan.j.williams@...el.com>,
	Neil Brown <neilb@...e.de>, linux-kernel@...r.kernel.org,
	linux-raid@...r.kernel.org
Subject: Re: 2.6.23.1: mdadm/raid5 hung/d-state

Justin Piszcz wrote:
> 
> 
> On Tue, 6 Nov 2007, BERTRAND Joël wrote:
> 
>> Justin Piszcz wrote:
>>>
>>>
>>> On Tue, 6 Nov 2007, BERTRAND Joël wrote:
>>>
>>>>     Done. Here is the obtained output:
>>>>
>>>> [ 1265.899068] check 4: state 0x6 toread 0000000000000000 read 
>>>> 0000000000000000 write fffff800fdd4e360 written 0000000000000000
>>>> [ 1265.941328] check 3: state 0x1 toread 0000000000000000 read 
>>>> 0000000000000000 write 0000000000000000 written 0000000000000000
>>>> [ 1265.972129] check 2: state 0x1 toread 0000000000000000 read 
>>>> 0000000000000000 write 0000000000000000 written 0000000000000000
>>>>
>>>>
>>>>     For information, after the crash, I have:
>>>>
>>>> Root poulenc:[/sys/block] > cat /proc/mdstat
>>>> Personalities : [raid1] [raid6] [raid5] [raid4]
>>>> md_d0 : active raid5 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
>>>>      1464725760 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
>>>>
>>>>     Regards,
>>>>
>>>>     JKB
>>>
>>> After the crash, it is not 'resyncing'?
>>
>>     No, it isn't...
>>
>>     JKB
>>
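(Whether md itself still considers the array to be syncing can be checked
directly; a minimal probe, using the array name from the mdstat output above:

    cat /sys/block/md_d0/md/sync_action

"idle" here means no resync, check or repair is in progress.)
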
> 
> After any crash or unclean shutdown the RAID should resync; if it doesn't, 
> that's not good. I'd suggest running a RAID check.
> 
> The 'repair' is supposed to clean it, although in some cases (md0 = swap) 
> it gets dirty again.
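
(For reference, a check/repair cycle like the one logged below is normally
driven through the md sysfs interface; a sketch, with md0 as an example
device:

    # start a read-only consistency check of the array
    echo check > /sys/block/md0/md/sync_action
    # when the check has finished, read the mismatch count
    cat /sys/block/md0/md/mismatch_cnt
    # rewrite redundancy information over any inconsistent stripes
    echo repair > /sys/block/md0/md/sync_action

mismatch_cnt is only refreshed by a completed check or repair pass.)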
> 
> Tue May  8 09:19:54 EDT 2007: Executing RAID health check for /dev/md0...
> Tue May  8 09:19:55 EDT 2007: Executing RAID health check for /dev/md1...
> Tue May  8 09:19:56 EDT 2007: Executing RAID health check for /dev/md2...
> Tue May  8 09:19:57 EDT 2007: Executing RAID health check for /dev/md3...
> Tue May  8 10:09:58 EDT 2007: cat /sys/block/md0/md/mismatch_cnt
> Tue May  8 10:09:58 EDT 2007: 2176
> Tue May  8 10:09:58 EDT 2007: cat /sys/block/md1/md/mismatch_cnt
> Tue May  8 10:09:58 EDT 2007: 0
> Tue May  8 10:09:58 EDT 2007: cat /sys/block/md2/md/mismatch_cnt
> Tue May  8 10:09:58 EDT 2007: 0
> Tue May  8 10:09:58 EDT 2007: cat /sys/block/md3/md/mismatch_cnt
> Tue May  8 10:09:58 EDT 2007: 0
> Tue May  8 10:09:58 EDT 2007: The meta-device /dev/md0 has 2176 
> mismatched sectors.
> Tue May  8 10:09:58 EDT 2007: Executing repair on /dev/md0
> Tue May  8 10:09:59 EDT 2007: The meta-device /dev/md1 has no mismatched 
> sectors.
> Tue May  8 10:10:00 EDT 2007: The meta-device /dev/md2 has no mismatched 
> sectors.
> Tue May  8 10:10:01 EDT 2007: The meta-device /dev/md3 has no mismatched 
> sectors.
> Tue May  8 10:20:02 EDT 2007: All devices are clean...
> Tue May  8 10:20:02 EDT 2007: cat /sys/block/md0/md/mismatch_cnt
> Tue May  8 10:20:02 EDT 2007: 2176
> Tue May  8 10:20:02 EDT 2007: cat /sys/block/md1/md/mismatch_cnt
> Tue May  8 10:20:02 EDT 2007: 0
> Tue May  8 10:20:02 EDT 2007: cat /sys/block/md2/md/mismatch_cnt
> Tue May  8 10:20:02 EDT 2007: 0
> Tue May  8 10:20:02 EDT 2007: cat /sys/block/md3/md/mismatch_cnt
> Tue May  8 10:20:02 EDT 2007: 0

	I cannot repair this RAID volume, and I cannot reboot the server 
without sending Stop+A. "init 6" stops at "INIT:". After reboot, md0 is 
resynchronized.
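
	(For a hang like this one, it can help to identify the blocked 
tasks before resorting to Stop+A; a sketch using standard interfaces:

    # list tasks stuck in uninterruptible sleep (D state)
    ps -eo pid,stat,wchan:30,cmd | awk '$2 ~ /^D/'
    # dump kernel stack traces for all tasks to the kernel log
    echo t > /proc/sysrq-trigger

The resulting traces in dmesg show where in the md/raid5 code each task 
is blocked.)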

	Regards,

	JKB
