Date:	Sun, 01 Apr 2007 11:23:08 +0000
From:	"Florian D." <flockmock@....at>
To:	Neil Brown <neilb@...e.de>
CC:	linux-kernel@...r.kernel.org
Subject: Re: cannot add device to partitioned raid6 array

Neil Brown wrote:
> Definitely the cause.  If you really need to add this device, you may
> be able to reduce the usage of the array, then reduce the size of the
> array, and then add the drive.
> Depending on how you have partitioned the array, and how you are using
> the partitions, you may just need to reduce the filesystem in the last
> partition, then use *fdisk to resize the partition.  Then something
> like:
> 
>    mdadm --grow --size=243000000 /dev/md_d4
> 
> It is generally safer to shrink the filesystem by more than necessary,
> resize the device, and then grow the filesystem up to the size of the
> device.  That way you avoid fiddly arithmetic and so reduce the chance
> of failure.
> 
> NeilBrown
> 
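
For reference, the suggested shrink-then-grow sequence would look roughly
like the following, assuming an ext3 filesystem on the array's last
partition /dev/md_d4p3 (the partition name, filesystem, and sizes here are
illustrative, not from the thread):

   umount /dev/md_d4p3
   e2fsck -f /dev/md_d4p3                      # resize2fs wants a clean fs
   resize2fs /dev/md_d4p3 200G                 # shrink well below the target
   fdisk /dev/md_d4                            # shrink the last partition to fit
   mdadm --grow --size=243000000 /dev/md_d4    # shrink the per-device size
   mdadm /dev/md_d4 --add /dev/sdd2            # the smaller drive now fits
   resize2fs /dev/md_d4p3                      # grow the fs back up to the partition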

Thanks, but I decided to start from scratch (a backup is available ;).
Now all partitions have the same size, and creating a RAID6 array from 2
drives and then hot-adding another one works, so this can be regarded as
solved.
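
(The hot-add step itself was presumably just something like

   mdadm /dev/md_d4 --add /dev/sdc2

once the two-drive array was running; the exact command is not quoted in
the thread.)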

But when I try to create the array with 3 drives at once, the following strange error appears:

flockmock ~ # mdadm --create /dev/md_d4 --level=6 -a mdp --chunk=32 -n 4 /dev/sda2 /dev/sdb2 /dev/sdc2 missing
mdadm: RUN_ARRAY failed: Input/output error
mdadm: stopped /dev/md_d4

dmesg shows:
[  484.362525] md: bind<sda2>
[  484.363429] md: bind<sdb2>
[  484.364337] md: bind<sdc2>
[  484.364397] md: md_d4: raid array is not clean -- starting background reconstruction
[  484.365876] raid5: device sdc2 operational as raid disk 2
[  484.365879] raid5: device sdb2 operational as raid disk 1
[  484.365881] raid5: device sda2 operational as raid disk 0
[  484.365884] raid5: cannot start dirty degraded array for md_d4
[  484.365886] RAID5 conf printout:
[  484.365887]  --- rd:4 wd:3
[  484.365889]  disk 0, o:1, dev:sda2
[  484.365891]  disk 1, o:1, dev:sdb2
[  484.365893]  disk 2, o:1, dev:sdc2
[  484.365895] raid5: failed to run raid set md_d4
[  484.365897] md: pers->run() failed ...
[  484.366271] md: md_d4 stopped.
[  484.366303] md: unbind<sdc2>
[  484.366309] md: export_rdev(sdc2)
[  484.366314] md: unbind<sdb2>
[  484.366318] md: export_rdev(sdb2)
[  484.366321] md: unbind<sda2>
[  484.366325] md: export_rdev(sda2)
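
(Side note, not from the thread: "cannot start dirty degraded array" is
the kernel refusing to start an array that is both degraded and not yet
synced, which is exactly what a freshly created 3-of-4 RAID6 is.  The
documented knob for this check is the start_dirty_degraded parameter,
e.g.

   modprobe md-mod start_dirty_degraded=1

or md-mod.start_dirty_degraded=1 on the kernel command line; whether
using it is advisable here is a judgment call, as it bypasses a safety
check.)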


I just wanted to report this FYI; I will take the first route and wait a little...
cheers, florian
