Date:	Sun, 24 Jan 2010 19:49:33 +0100
From:	"Ing. Daniel Rozsnyó" <daniel@...snyo.com>
To:	linux-kernel@...r.kernel.org
Subject: bio too big - in nested raid setup

Hello,
   I am having trouble with a nested RAID setup - when one array is added
to the other, "bio too big device md0" messages start appearing:

bio too big device md0 (144 > 8)
bio too big device md0 (248 > 8)
bio too big device md0 (32 > 8)
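
   (Anything like the following will pull these lines out of the kernel
log:)

# dmesg | grep 'bio too big'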

   From internet searches I have found neither a solution nor a report of
an error quite like mine, only a note that data corruption can occur when
this happens.

Description:

   My setup is the following - one 2TB drive and four 500GB drives. The
goal is to mirror the 2TB drive against a linear array of the other four
drives.
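
   For completeness, the two arrays were created with mdadm along these
lines (the exact invocations are from memory, device names as above, so
take them as an approximation):

# mdadm --create /dev/md1 --level=linear --raid-devices=4 \
	/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
	/dev/sda2 missing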

   So.. the state without the error above is this:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active linear sdb1[0] sde1[3] sdd1[2] sdc1[1]
       1953535988 blocks super 1.1 0k rounding

md0 : active raid1 sda2[0]
       1953447680 blocks [2/1] [U_]
       bitmap: 233/233 pages [932KB], 4096KB chunk

unused devices: <none>

   With these request size limits (max_sectors_kb and max_hw_sectors_kb,
in KB):

# cat /sys/block/md{0,1}/queue/max_{,hw_}sectors_kb
127
127
127
127
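
   (The same check can be run on the underlying disks, e.g.:)

# cat /sys/block/sd{a,b,c,d,e}/queue/max_{,hw_}sectors_kb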

   Now, I add the four-drive array to the mirror - and the system starts
showing the bio error on any significant disk activity (probably writes
only). The reboot/shutdown process is full of these errors.

   The step which messes up the system (ignore the "re-added" below; the
same thing happened the very first time I constructed the four-drive
array an hour ago):

# mdadm /dev/md0 --add /dev/md1
mdadm: re-added /dev/md1

# cat /sys/block/md{0,1}/queue/max_{,hw_}sectors_kb
4
4
127
127
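
   That 4 KB limit matches the "(... > 8)" in the error messages above:
the second number appears to be the queue limit in 512-byte sectors, and
8 sectors * 512 bytes = 4096 bytes = 4 KB, so any bio bigger than 4 KB
submitted to md0 now trips the check:

# echo $((8 * 512))
4096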

dmesg just shows this:

md: bind<md1>
RAID1 conf printout:
  --- wd:1 rd:2
  disk 0, wo:0, o:1, dev:sda2
  disk 1, wo:1, o:1, dev:md1
md: recovery of RAID array md0
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 1953447680 blocks.


   And as soon as a write occurs on the array:

bio too big device md0 (40 > 8)

   Removing md1 from md0 does not help the situation; I need to reboot
the machine.
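
   (For reference, the removal attempt was along these lines - failing
the device first, then removing it; exact commands from memory:)

# mdadm /dev/md0 --fail /dev/md1
# mdadm /dev/md0 --remove /dev/md1

   Even after that, the reported limits on md0 do not seem to go back
to 127.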

   The md0 array holds LVM, with root, swap, portage, distfiles and home
logical volumes inside.

   My system is:

# uname -a
Linux desktop 2.6.32-gentoo-r1 #2 SMP PREEMPT Sun Jan 24 12:06:13 CET 2010 i686 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux


Thanks for any help,

Daniel
