Date:	Thu, 28 Jan 2010 10:24:43 +0100
From:	"Ing. Daniel Rozsnyó" <daniel@...snyo.com>
To:	Neil Brown <neilb@...e.de>
CC:	Milan Broz <mbroz@...hat.com>, Marti Raudsepp <marti@...fo.org>,
	linux-kernel@...r.kernel.org
Subject: Re: bio too big - in nested raid setup

Neil Brown wrote:
> On Mon, 25 Jan 2010 19:27:53 +0100
> Milan Broz <mbroz@...hat.com> wrote:
> 
>> On 01/25/2010 04:25 PM, Marti Raudsepp wrote:
>>> 2010/1/24 "Ing. Daniel Rozsnyó" <daniel@...snyo.com>:
>>>> Hello,
>>>>  I am having troubles with nested RAID - when one array is added to the
>>>> other, the "bio too big device md0" messages are appearing:
>>>>
>>>> bio too big device md0 (144 > 8)
>>>> bio too big device md0 (248 > 8)
>>>> bio too big device md0 (32 > 8)
>>> I *think* this is the same bug that I hit years ago when mixing
>>> different disks and 'pvmove'
>>>
>>> It's a design flaw in the DM/MD frameworks; see comment #3 from Milan Broz:
>>> http://bugzilla.kernel.org/show_bug.cgi?id=9401#c3
>> Hm. I don't think it is the same problem; you are only adding a device to an md array...
>> (adding cc: Neil, this seems to me like MD bug).
>>
>> (original report for reference is here http://lkml.org/lkml/2010/1/24/60 )
> 
> No, I think it is the same problem.
> 
> When you have a stack of devices, the top level client needs to know the
> maximum restrictions imposed by lower level devices to ensure it doesn't
> violate them.
> However there is no mechanism for a device to report that its restrictions
> have changed.
> So when md0 gains a linear leg and so needs to reduce the max size for
> requests, there is no way to tell DM, so DM doesn't know.  And as the
> filesystem only asks DM for restrictions, it never finds out about the
> new restrictions.
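[The failure mode Neil describes can be sketched as a small model. This is an illustrative Python sketch with hypothetical names, not kernel code: the upper layer snapshots the stacked device's limit once, the limit later shrinks when a new leg is added, and nothing tells the upper layer.]

```python
# Sketch (hypothetical names): why "bio too big" appears when a stacked
# device's request-size limit changes after the upper layer cached it.

class BlockDevice:
    def __init__(self, name, max_sectors):
        self.name = name
        self.max_sectors = max_sectors   # per-request limit, in 512-byte sectors

class StackedDevice(BlockDevice):
    def __init__(self, name, members):
        # A stacked device must honour its most restrictive member.
        super().__init__(name, min(m.max_sectors for m in members))
        self.members = members

    def add_member(self, dev):
        # Adding a more restrictive leg lowers our limit...
        self.members.append(dev)
        self.max_sectors = min(m.max_sectors for m in self.members)
        # ...but nothing notifies whoever cached the old value.

def submit(dev, nr_sectors):
    if nr_sectors > dev.max_sectors:
        return f"bio too big device {dev.name} ({nr_sectors} > {dev.max_sectors})"
    return "ok"

md0 = StackedDevice("md0", [BlockDevice("sda", 1024)])
cached_limit = md0.max_sectors             # DM/filesystem snapshots the limit once
md0.add_member(BlockDevice("leg", 8))      # new leg accepts only 8 sectors (4 KiB)

# The upper layer still sizes requests to the stale cached limit:
print(submit(md0, 144))   # -> "bio too big device md0 (144 > 8)"
```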

Neil, why does it even reduce its block size? I've tried both 
"linear" and "raid0" (as they are the only ways to get 2T from 4x500G), 
and both behave the same (sda has a 512 kB limit, md0 127 kB, linear 
127 kB, and raid0 512 kB).

I do not see how a 512:127 or 512:512 combination leads to a 4 kB limit.

Is it because:
  - of rebuilding the array?
  - of a non-multiplicative max block size?
  - of a non-multiplicative total device size?
  - of nesting?
  - of some other fallback to 1 page?
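[The "fallback to 1 page" guess is consistent with the numbers in the error messages: the limits there are in 512-byte sectors, so the "8" in "bio too big device md0 (144 > 8)" is exactly one 4 KiB page. A quick check, assuming the usual 512-byte sector and 4 KiB page size:]

```python
SECTOR_SIZE = 512    # bytes per sector (kernel convention)
PAGE_SIZE = 4096     # bytes, the common x86 page size

max_sectors = 8      # the limit printed in "bio too big device md0 (144 > 8)"
limit_bytes = max_sectors * SECTOR_SIZE

print(limit_bytes)               # 4096
print(limit_bytes == PAGE_SIZE)  # True: the limit is exactly one page
```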

I ask because I cannot believe that a pre-assembled nested stack would 
result in a 4 kB max limit. But I haven't tried that yet (e.g. from a live CD).

The block device should not do this kind of "magic" unless the higher 
layers support it. Which layers have proper support, then?
  - standard partition table?
  - LVM?
  - filesystem drivers?

> This should be fixed by having the filesystem not care about restrictions,
> and the lower levels just split requests as needed, but that just hasn't
> happened....
> 
> If you completely assemble md0 before activating the LVM stuff on top of it,
> this should work.
> 
> NeilBrown
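[Neil's proposed fix (the lower levels splitting oversized requests instead of the filesystem tracking every limit) can be sketched as follows. This is an illustrative model, not the kernel's actual bio-splitting code:]

```python
def split_request(start_sector, nr_sectors, max_sectors):
    """Split one oversized request into chunks the device can accept."""
    chunks = []
    while nr_sectors > 0:
        n = min(nr_sectors, max_sectors)   # never exceed the device limit
        chunks.append((start_sector, n))
        start_sector += n
        nr_sectors -= n
    return chunks

# A 144-sector bio against an 8-sector limit becomes 18 back-to-back requests:
print(split_request(0, 144, 8))
```

With splitting in place, a stale limit in the upper layer would cost some efficiency but never fail the I/O.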

