Date:	Thu, 17 May 2007 09:44:10 +1200
From:	"Jeff Zheng" <Jeff.Zheng@...ace.com>
To:	<david@...g.hm>, "Andreas Dilger" <adilger@...sterfs.com>
Cc:	<linux-kernel@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>
Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

 
You will definitely meet the same problem. As very large hardware disks
become more and more common, this will become a big issue for software
raid.


Jeff

-----Original Message-----
From: david@...g.hm [mailto:david@...g.hm] 
Sent: Thursday, 17 May 2007 6:04 a.m.
To: Andreas Dilger
Cc: Jeff Zheng; linux-kernel@...r.kernel.org;
linux-fsdevel@...r.kernel.org
Subject: Re: Software raid0 will crash the file-system, when each disk
is 5TB


my experience is that if you don't have CONFIG_LBD enabled then the
kernel will report the larger disk as 2G and everything will work; you
just won't get all the space.
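
(for illustration only, not the md or block-layer code itself: a minimal
user-space sketch of why a 32-bit sector_t can't describe a device past
2TB, which is the situation without CONFIG_LBD; the 5.5TB member size is
taken from this thread and 512-byte sectors are assumed)

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t sector = 512;                      /* bytes per sector (assumed) */
    const uint64_t dev    = 5500ULL * 1000000000ULL;  /* ~5.5TB member device (from the thread) */

    uint64_t sectors = dev / sector;                  /* true sector count */
    uint32_t wrapped = (uint32_t)sectors;             /* what a 32-bit sector_t keeps */

    printf("true capacity  : %" PRIu64 " sectors = %" PRIu64 " bytes\n",
           sectors, sectors * sector);
    printf("32-bit sector_t: %" PRIu32 " sectors = %" PRIu64 " bytes\n",
           wrapped, (uint64_t)wrapped * sector);

    /* Anything past the 2TB mark (2^32 sectors * 512 bytes) is lost; the
     * size the kernel actually reports then depends on how the driver
     * handles the truncated value. */
    return 0;
}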

plus he seems to be crashing at around 500G of data

and finally (if I am reading the post correctly) if he configures the
drives as 4x2.75TB=11TB instead of 2x5.5TB=11TB he doesn't have the same
problem.

I'm getting ready to set up a similar machine that will have 3x10TB
(three 15-disk arrays with 750G drives), but won't be ready to try this
for a few more days.

David Lang
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
