lists.openwall.net - Open Source and information security mailing list archives
Date:	Mon, 6 Jan 2014 09:11:24 +1100
From:	NeilBrown <neilb@...e.de>
To:	Vasiliy Tolstov <v.tolstov@...fip.ru>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: mdadm raid1 regression

On Fri, 27 Dec 2013 10:48:03 +0400 Vasiliy Tolstov <v.tolstov@...fip.ru>
wrote:

> 2013/4/22 NeilBrown <neilb@...e.de>:
> > I'll try to make sure that works correctly for the next release.
> > Thanks for the report.
> 
> 
> Sorry, Neil, for bumping this old thread. I'm again having problems with
> the data-offset param for mdadm.
> I'm using the version from git master (GitHub). If I try to create a raid1 like
> /sbin/mdadm --create --data-offset=2048 --metadata=1.2 --verbose
> --force --run --bitmap=internal --assume-clean --name=md21_901
> md21_901 --level=raid1 --raid-devices=2 /dev/mapper/sas00-21_901
> /dev/mapper/sas01-21_901
> I get:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 64e0028e:301aa3ce:cdf1a54f:a9e28f27
>            Name : xen25:md21_901  (local to host xen25)
>   Creation Time : Fri Dec 27 10:43:06 2013
>      Raid Level : raid1
>    Raid Devices : 2
> 
>  Avail Dev Size : 10489856 (5.00 GiB 5.37 GB)
>      Array Size : 5244928 (5.00 GiB 5.37 GB)
>     Data Offset : 4096 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=4008 sectors, after=0 sectors
>           State : clean
>     Device UUID : 38771de6:cb5f0dbc:9f32f85f:164e1e89
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Dec 27 10:43:22 2013
>   Bad Block Log : 512 entries available at offset 72 sectors
>        Checksum : 7f07eb77 - correct
>          Events : 2
> 
> 
> But when I try to create a raid1 like
> /sbin/mdadm --create --data-offset=1024 --metadata=1.2 --verbose
> --force --run --bitmap=internal --assume-clean --name=md21_901
> md21_901 --level=raid1 --raid-devices=2 /dev/mapper/sas00-21_901
> /dev/mapper/sas01-21_901
> I get:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : ef22dca1:1424ea9e:1b4dce89:27c61a91
>            Name : xen25:md21_901  (local to host xen25)
>   Creation Time : Fri Dec 27 10:44:21 2013
>      Raid Level : raid1
>    Raid Devices : 2
> 
>  Avail Dev Size : 10491904 (5.00 GiB 5.37 GB)
>      Array Size : 5245952 (5.00 GiB 5.37 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=1960 sectors, after=0 sectors
>           State : clean
>     Device UUID : afae5e27:6c706246:4c3e3cb0:e5c726ac
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Dec 27 10:44:26 2013
>   Bad Block Log : 512 entries available at offset 72 sectors
>        Checksum : 45be4cd1 - correct
>          Events : 2
> 
> 
> Why does the data offset specified on the command line come out doubled
> in the resulting md array component?
> 

The value given to --data-offset is assumed to be kilobytes unless it has a
suffix: 'M' for megabytes, 's' for sectors.

The value reported by 'mdadm -D' is (as it says) in sectors.
1024 kilobytes is 2048 sectors.
If you want to specify sectors, add an 's' suffix.
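
For what it's worth, the arithmetic can be checked in the shell (a sketch,
assuming the usual 512-byte sector):

```shell
# --data-offset without a suffix is taken as kilobytes;
# 'mdadm -D' reports the Data Offset in 512-byte sectors.
offset_kib=1024
sectors=$(( offset_kib * 1024 / 512 ))
echo "$sectors sectors"    # prints "2048 sectors"

# So to get a data offset of 1024 sectors, pass the value
# with the 's' suffix, e.g.:
#   mdadm --create ... --data-offset=1024s ...
```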

NeilBrown

