Message-ID: <20250602210514.7acd5325@Zen-II-x12.niklas.com>
Date: Mon, 2 Jun 2025 21:05:14 -0400
From: David Niklas <simd@...mail.net>
To: Linux RAID <linux-raid@...r.kernel.org>
Cc: linux-kernel@...r.kernel.org
Subject: Need help increasing raid scan efficiency.

Hello,
My PC recently suffered a rather nasty case of hardware failure where the
motherboard would damage the CPU and RAM. I ended up with different data
on different members of my RAID6 array.

I wanted to scan through the drives and take checksums of various files,
in an attempt to ascertain which drives took the most corruption, to find
the date the damage started occurring (it was unclear exactly when it
began), and to rescue some of the data off the good pairs.
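For concreteness, the kind of checksum pass I have in mind is roughly
this (assuming the array holds a filesystem directly; the mount point and
output path are just placeholders):

# mount -o ro /dev/md127 /mnt/raid
# find /mnt/raid -type f -print0 | xargs -0 sha256sum > /root/sums-pair-0-1.txt

Repeating that for each pair combination and diffing the resulting lists
should show which files differ between pairs.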

So I set the array to read-only mode and started it with only two of the
drives, drives 0 and 1. Then I tried to start a second pair, drives 2 and
3, so that I could scan both pairs simultaneously, with the intent of
then switching over to pairs 0 and 2 plus 1 and 3, and finally 0 and 3
plus 1 and 2 (sketched below).
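
Roughly the rotation I had in mind, where <memberN> stands for the block
device in RAID slot N (md128 is just the next free array name):

Pass 1:
# mdadm --assemble -o --run /dev/md127 <member0> <member1>
# mdadm --assemble -o --run /dev/md128 <member2> <member3>

Pass 2: members 0 and 2 together, 1 and 3 together.
Pass 3: members 0 and 3 together, 1 and 2 together.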


Starting the second pair failed with the error message:
# mdadm --assemble -o --run /dev/md128 /dev/sdc /dev/sdd
mdadm: Found some drive for array that is already active: /dev/md127
mdadm: giving up.
# mdadm --detail /dev/md127
           Version : 1.2
     Creation Time : XXX
        Raid Level : raid6
        Array Size : XXX
     Used Dev Size : XXX
      Raid Devices : 4
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : XXX
             State : clean, degraded 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : XXX
              UUID : XXX
            Events : 3826931

    Number   Major   Minor   RaidDevice State
       7       9        0        0      active sync   /dev/md0
       -       0        0        1      removed
       -       0        0        2      removed
       6       9        1        3      active sync   /dev/md1



Any ideas as to how I can get mdadm to run the array the way I described
above? I did try --force, but mdadm refused to listen.
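
For reference, the --force variant I tried was along these lines (the
same flags as before, just with --force added):

# mdadm --assemble -o --run --force /dev/md128 /dev/sdc /dev/sdd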

Thanks,
David
