Date:   Thu, 23 Mar 2017 13:37:21 -0400
From:   lsorense@...lub.uwaterloo.ca (Lennart Sorensen)
To:     raid@...ller.org
Cc:     linux-kernel@...r.kernel.org
Subject: Re: RAID array is gone, please help

On Thu, Mar 23, 2017 at 05:49:05PM +0100, raid@...ller.org wrote:
> I am hoping someone here will help me. Was reading this site...
> 
> https://raid.wiki.kernel.org/index.php/Linux_Raid
> 
> and it said to email this list if you've tried everything other than mdadm
> --create.
> 
> 
> I am running Ubuntu 16.04. Machine name is fred. I used webmin to create a 4
> disk RAID10 array yesterday. I moved all my data onto the array.
> 
> Today, I had to reboot my PC. The resync was still not done, but I read
> online that it's OK to boot during resync. After boot, my array was gone. I
> checked syslog, and it just has this line:
> 
> DeviceDisappeared event detected on md device /dev/md0
> 
> I did not partition my disks before building the array. So I believe the
> array consisted of /dev/sdc, /dev/sdd, /dev/sde, and /dev/sdf.
> 
> Here's some info...
> 
> stephen@...d> lsblk
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda      8:0    0 117.4G  0 disk
> ├─sda1   8:1    0 109.7G  0 part /
> ├─sda2   8:2    0     1K  0 part
> └─sda5   8:5    0   7.7G  0 part [SWAP]
> sdb      8:16   0 465.8G  0 disk
> └─sdb1   8:17   0 465.8G  0 part
> sdc      8:32   0   3.7T  0 disk
> sdd      8:48   0   3.7T  0 disk
> sde      8:64   0   3.7T  0 disk
> sdf      8:80   0   3.7T  0 disk
> 
> stephen@...d> sudo mdadm --examine /dev/sdc
> [sudo] password for stephen:
> /dev/sdc:
>    MBR Magic : aa55
> Partition[0] :   4294967295 sectors at            1 (type ee)
> stephen@...d>
> stephen@...d> sudo mdadm --examine /dev/sdc1
> mdadm: cannot open /dev/sdc1: No such file or directory
> stephen@...d>
> stephen@...d> sudo mdadm --examine /dev/sdd
> /dev/sdd:
>    MBR Magic : aa55
> Partition[0] :   4294967295 sectors at            1 (type ee)
> stephen@...d>
> stephen@...d> sudo mdadm --examine /dev/sde
> /dev/sde:
>    MBR Magic : aa55
> Partition[0] :   4294967295 sectors at            1 (type ee)
> stephen@...d>
> stephen@...d> sudo mdadm --examine /dev/sdf
> /dev/sdf:
>    MBR Magic : aa55
> Partition[0] :   4294967295 sectors at            1 (type ee)
> 
> stephen@...d> sudo mdadm --assemble --force /dev/md0 /dev/sdc /dev/sdd
> /dev/sde /dev/sdf
> mdadm: Cannot assemble mbr metadata on /dev/sdc
> mdadm: /dev/sdc has no superblock - assembly aborted
> 
> Thank you for any help you can provide.

Did your disks have partitions previously?  That output looks a lot like
the protective MBR partition table for a disk with GPT partitions.

Could that protective MBR still sitting in sector 0 be confusing mdadm?

I have never personally done any md raid without partitions.  To me they
just make more sense.

One way to test would be to save a copy of sector 0, overwrite sector 0
with zeros, and then run mdadm --examine again to see if that makes a
difference.  That way you can always put the saved copy of sector 0 back.

My understanding is that the default is to put the raid superblock at
offset 4k, so it would not overwrite an existing MBR partition table.
If it also happens due to rounding that the end of the disk wasn't
overwritten (or even just because that part of the filesystem hasn't
been written to yet), then the backup GPT from before would still be
intact, and could perhaps cause even more confusion later if gdisk or
similar is pointed at the disk.  You really want to be sure there is no
trace of the partition table left before using a disk raw for md raid.

Any chance the system saved an mdadm.conf file of your setup?
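A couple of places worth looking on Ubuntu (paths are an assumption
about a stock 16.04 install):

```shell
# Stock Ubuntu keeps the mdadm config here; ARRAY lines record the
# UUID and member layout from when the array was created.
grep '^ARRAY' /etc/mdadm/mdadm.conf 2>/dev/null

# A copy may also have been baked into the initramfs before the reboot:
# lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm
```

An ARRAY line with the old UUID would at least confirm what the array
looked like, even if the superblocks themselves are gone.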

-- 
Len Sorensen
