Message-ID: <47ed01bd0812180508t89d1aecr13d2b9e8402ed885@mail.gmail.com>
Date:	Thu, 18 Dec 2008 08:08:58 -0500
From:	"Dylan Taft" <d13f00l@...il.com>
To:	"Nick Andrew" <nick@...k-andrew.net>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: MDADM Software Raid Woes

I may give that a shot.

But it also looks like the major numbers are wrong on the actual device nodes:
brw-r-----  1 root disk   9, 0 Dec 17 19:13 0
brw-r-----  1 root disk 259, 0 Dec 17 19:13 1
brw-r-----  1 root disk 259, 1 Dec 17 19:13 2
brw-r-----  1 root disk 259, 2 Dec 17 19:13 3

Shouldn't this be 9,something, not 259?

If I change the partition type from fd to something else, so the kernel
doesn't auto-assemble the array, and then assemble it manually via mdadm,
things work correctly.
Any idea what could cause the major numbers to be wrong?
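
For reference, the manual workaround I'm describing looks roughly like
this (a sketch only; device and array names are the ones from this
thread, and the fdisk keystrokes are from memory):

```shell
# Change sdb2/sdc2 from type fd (Linux raid autodetect) to plain 83
# so the kernel skips auto-assembly at boot:
#   fdisk /dev/sdb   ->  t, 2, 83, w   (and the same for /dev/sdc)

# Assemble the raid0 array by hand:
mdadm --assemble /dev/md1 /dev/sdb2 /dev/sdc2

# Re-read md1's partition table so md1p1..md1p3 appear:
blockdev --rereadpt /dev/md1

# Check the majors/minors on the resulting nodes:
ls -l /dev/md1*
```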

On Thu, Dec 18, 2008 at 1:56 AM, Nick Andrew <nick@...k-andrew.net> wrote:
> On Wed, Dec 17, 2008 at 11:33:31PM -0500, Dylan Taft wrote:
>> I created the raids with the mdadm tool, originally configured manually
>> /dev/md0 for sdb1/sdc1 raid1 to be mounted as /boot
>> /dev/md1 for sdb2/sdc2 raid0
>>
>> I created 3 partitions on md1, one for future /, one for swap, one for
>> future /home.
>> So I had /dev/md0, /dev/md1, and /dev/md1p1, /dev/md1p2, /dev/md1p3
>
> Why not use LVM for /dev/md1 ?
>
>        pvcreate /dev/md1
>        vgcreate myvg /dev/md1
>        lvcreate -L 10G -n root myvg
>        lvcreate -L 2G -n swap myvg
>        lvcreate -L 20G -n home myvg
>
> Also since you have specified raid0 for /dev/md1 you could
> use striping under LVM instead of RAID:
>
>        pvcreate /dev/sdb2 /dev/sdc2
>        vgcreate myvg /dev/sdb2 /dev/sdc2
>        lvcreate -L 10G -n root -i 2 -I 32 myvg
>        lvcreate -L 2G -n swap -i 2 -I 32 myvg
>        lvcreate -L 20G -n home -i 2 -I 32 myvg
>
> That way you have some flexibility in your use of sdb2
> and sdc2, e.g. you can move your LVs off sdc2 if you want
> to replace or extend it later.
>
> Nick.
>
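
(The "move your LVs off sdc2" step Nick mentions would go something
like the sketch below, assuming the volume group from his example and
a hypothetical replacement partition /dev/sdd2 with enough free
extents to receive the data:)

```shell
# Add the replacement partition to the volume group first,
# so pvmove has somewhere to put the extents:
pvcreate /dev/sdd2
vgextend myvg /dev/sdd2

# Migrate all physical extents off sdc2 (works online):
pvmove /dev/sdc2

# Once sdc2 is empty, drop it from the volume group:
vgreduce myvg /dev/sdc2
```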
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
