Date:   Thu, 23 Mar 2017 19:09:41 +0100
From:   raid@...ller.org
To:     Lennart Sorensen <lsorense@...lub.uwaterloo.ca>
Cc:     linux-kernel@...r.kernel.org
Subject: Re: RAID array is gone, please help

Thank you very much for your reply.

I naively thought that going without partitions would be the best
starting point, given that 3 of the disks had previously been in a
RAID5 array (possibly with partitions, I'm not sure), but based on
other things I've googled, that looks like a bad choice. Lesson learned.

I have an mdadm.conf file, but it could be a remnant of my previous 
array. I've already edited it trying to get things to work, so I'm
not sure if it was updated when I created the new array or not.

I see various people online have had success in my situation using
mdadm --create /dev/md0 --assume-clean --verbose --level=10 \
--raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf

Some people used --assume-clean, and some didn't. Given that my array
hadn't finished its resync, maybe I should leave that out.
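
Whichever way I go on --assume-clean, my tentative plan (just a sketch
based on what I've read, nothing tested) would be to check the result
read-only before trusting it:

# after the --create, check the filesystem without writing anything
sudo fsck -n /dev/md0
# or mount read-only and see if the data looks sane
sudo mount -o ro /dev/md0 /mnt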

If that works, I guess I then need to get the data off the array,
delete it, and recreate it on partitioned disks, or risk this happening
again at the next reboot, for whatever reason.
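
Presumably, before repartitioning, I should also wipe every trace of
the old metadata first. My guess from the man pages (untested) is
something like:

# clear filesystem/RAID/partition-table signatures
sudo wipefs -a /dev/sdc
# or destroy both GPT copies plus the MBR in one go
sudo sgdisk --zap-all /dev/sdc
(and the same for sdd, sde and sdf)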

Anyone think it's a bad idea to try mdadm --create at this point?

Sorry, I'm not sure how to write 0's to sector 0...
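After a look at the dd man page, though, maybe this is what you meant?
(My best guess, assuming 512-byte logical sectors; please correct me if
it's wrong.)

# save a copy of sector 0 first (assuming 512-byte logical sectors)
sudo dd if=/dev/sdc of=sdc-sector0.bak bs=512 count=1
# overwrite sector 0 with zeros
sudo dd if=/dev/zero of=/dev/sdc bs=512 count=1
# restore the saved copy later if needed
sudo dd if=sdc-sector0.bak of=/dev/sdc bs=512 count=1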

Thank you.

On 3/23/2017 18:37, Lennart Sorensen wrote:
> On Thu, Mar 23, 2017 at 05:49:05PM +0100, raid@...ller.org wrote:
>> I am hoping someone here can help me. I was reading this site...
>>
>> https://raid.wiki.kernel.org/index.php/Linux_Raid
>>
>> and it said to email this list if you've tried everything other than mdadm
>> --create.
>>
>>
>> I am running Ubuntu 16.04. Machine name is fred. I used webmin to create a 4
>> disk RAID10 array yesterday. I moved all my data onto the array.
>>
>> Today, I had to reboot my PC. The resync was still not done, but I read
>> online that it's OK to boot during resync. After boot, my array was gone. I
>> checked syslog, and it just has this line:
>>
>> DeviceDisappeared event detected on md device /dev/md0
>>
>> I did not partition my disks before building the array. So I believe the
>> array consisted of /dev/sdc, /dev/sdd, /dev/sde, and /dev/sdf.
>>
>> Here's some info...
>>
>> stephen@...d> lsblk
>> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>> sda      8:0    0 117.4G  0 disk
>> ├─sda1   8:1    0 109.7G  0 part /
>> ├─sda2   8:2    0     1K  0 part
>> └─sda5   8:5    0   7.7G  0 part [SWAP]
>> sdb      8:16   0 465.8G  0 disk
>> └─sdb1   8:17   0 465.8G  0 part
>> sdc      8:32   0   3.7T  0 disk
>> sdd      8:48   0   3.7T  0 disk
>> sde      8:64   0   3.7T  0 disk
>> sdf      8:80   0   3.7T  0 disk
>>
>> stephen@...d> sudo mdadm --examine /dev/sdc
>> [sudo] password for stephen:
>> /dev/sdc:
>>    MBR Magic : aa55
>> Partition[0] :   4294967295 sectors at            1 (type ee)
>> stephen@...d>
>> stephen@...d> sudo mdadm --examine /dev/sdc1
>> mdadm: cannot open /dev/sdc1: No such file or directory
>> stephen@...d>
>> stephen@...d> sudo mdadm --examine /dev/sdd
>> /dev/sdd:
>>    MBR Magic : aa55
>> Partition[0] :   4294967295 sectors at            1 (type ee)
>> stephen@...d>
>> stephen@...d> sudo mdadm --examine /dev/sde
>> /dev/sde:
>>    MBR Magic : aa55
>> Partition[0] :   4294967295 sectors at            1 (type ee)
>> stephen@...d>
>> stephen@...d> sudo mdadm --examine /dev/sdf
>> /dev/sdf:
>>    MBR Magic : aa55
>> Partition[0] :   4294967295 sectors at            1 (type ee)
>>
>> stephen@...d> sudo mdadm --assemble --force /dev/md0 /dev/sdc /dev/sdd
>> /dev/sde /dev/sdf
>> mdadm: Cannot assemble mbr metadata on /dev/sdc
>> mdadm: /dev/sdc has no superblock - assembly aborted
>>
>> Thank you for any help you can provide.
>
> Did your disks have partitions previously?  That output looks a lot like
> the protective MBR partition table for a disk with GPT partitions.
>
> Could that, still existing in sector 0, be confusing mdadm?
>
> I have never personally done any md raid without partitions.  To me they
> just make more sense.
>
> One way to test could be to save a copy of sector 0, then overwrite sector
> 0 with zeros and then run mdadm --examine again to see if that makes a
> difference.  You can always put back the saved copy of sector 0 that way.
>
> My understanding is that the default is to put the raid superblock at
> offset 4k, so it would not overwrite an existing MBR partition table.
> If it also happens due to rounding that the end of the disk isn't
> overwritten (or even just because that part of the filesystem wasn't
> written to yet), then the backup GPT from before would still be intact,
> and could perhaps cause even more confusion later if gdisk or similar
> is pointed at the disk.  You really want to be sure there is no trace
> of the partition table left before using the disk raw for md raid.
>
> Any chance the system saved an mdadm.conf file of your setup?
>
