Message-ID: <462657E3.3000004@garzik.org>
Date:	Wed, 18 Apr 2007 13:39:47 -0400
From:	Jeff Garzik <jeff@...zik.org>
To:	Lennart Sorensen <lsorense@...lub.uwaterloo.ca>
CC:	Tomasz Kłoczko <kloczek@...y.mif.pg.gda.pl>,
	Diego Calleja <diegocg@...il.com>,
	Christoph Hellwig <hch@...radead.org>,
	Stefan Richter <stefanr@...6.in-berlin.de>,
	Jan Engelhardt <jengelh@...ux01.gwdg.de>,
	Mike Snitzer <snitzer@...il.com>, Neil Brown <neilb@...e.de>,
	"David R. Litwin" <presently42@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: ZFS with Linux: An Open Plea

Lennart Sorensen wrote:
> On Mon, Apr 16, 2007 at 10:18:45PM +0200, Tomasz Kłoczko wrote:
>> Of course that can be true in most cases (probably for some more advanced 
>> RAID controllers). A few weeks ago I performed some basic tests on a Dell 
>> 2950 with 8x73GB SAS disks .. just to kill time (waiting for access to some 
>> bigger box ;). This small iron box has an internal RAID controller (Dell 
>> uses an LSI Logic SAS MegaRAID based controller in this box). Every 
>> combination of controller-level RAID was slower than using it as plain 
>> JBOD with LVM or MD+LVM. The difference between HW and soft RAID was not 
>> big (1-6% depending on configuration), but HW always produced worse 
>> results (don't ask me why). Finally I decided to use these disks as four 
>> RAID1 LUNs only because under Linux I can't read each physical disk's 
>> SMART data, and protecting them with controller-level RAID while 
>> collecting SNMP traps from the DRAC card was a kind of workaround for 
>> this (in my case it would be better to constantly monitor disk health and 
>> collect SMART data to observe trends on, for example, Zabbix graphs and 
>> try to predict faults using triggers). On top of this, different types of 
>> volumes were configured at the LVM level (some with striping, some 
>> without; some with bigger, some with smaller chunk size).
> 
> Does it matter that Google's recent report on disk failures indicated
> that SMART never predicted anything useful as far as they could tell?
> Certainly none of my drive failures was ever preceded by any kind of
> SMART indication that something was wrong.
> 
> I think the main benefit of MD RAID is that it is portable, doesn't
> lock you into a specific piece of hardware, lets you span multiple
> controllers, and it is likely easier to have bugs fixed in MD RAID than
> in some RAID controller's firmware, if any were to be found.  Performance
> advantages are a bonus of course.
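
As an aside, a minimal sketch of the kind of MD RAID1 + LVM layout described
in the quoted setup, driven through the standard mdadm and LVM command-line
tools; the device names, volume group, logical volume names, sizes and stripe
parameters below are hypothetical, not the configuration actually used on
that box:

#!/usr/bin/env python3
"""Sketch: pair eight disks into four MD RAID1 mirrors, then build
striped and linear LVM volumes on top of them."""
import subprocess

def run(*cmd: str) -> None:
    # Echo and execute one external command, aborting on failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical device names for the eight SAS disks.
pairs = [("/dev/sda", "/dev/sdb"), ("/dev/sdc", "/dev/sdd"),
         ("/dev/sde", "/dev/sdf"), ("/dev/sdg", "/dev/sdh")]

# Four RAID1 LUNs, /dev/md0 .. /dev/md3.
for i, (a, b) in enumerate(pairs):
    run("mdadm", "--create", f"/dev/md{i}", "--level=1",
        "--raid-devices=2", a, b)

# All four mirrors go into one volume group.
md_devices = [f"/dev/md{i}" for i in range(len(pairs))]
run("pvcreate", *md_devices)
run("vgcreate", "vg0", *md_devices)

# One volume striped across the mirrors (256 KiB stripe size), one linear.
run("lvcreate", "--stripes", "4", "--stripesize", "256",
    "-L", "100G", "-n", "lv_striped", "vg0")
run("lvcreate", "-L", "50G", "-n", "lv_linear", "vg0")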

SMART largely depends on how you use it.  Simply polling the current 
status will not give you all the benefits SMART provides.  On the 
dedicated servers that I rent, running the extended test ('-t long') 
often finds problems before you start losing data or dealing with a 
drive death.  Certainly not a huge sample size, but it backs up what I 
hear in the field.  Running the SMART tests on a weekly basis seems most 
effective, though you'll want to stagger the tests if running in a RAID set.
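
As a minimal sketch of that staggering idea, assuming smartmontools is
installed and using hypothetical device names, a weekly cron job could rotate
the extended self-test across the members of the set so only one drive is
busy with a self-test at a time:

#!/usr/bin/env python3
"""Sketch: run 'smartctl -t long' on a different RAID member each week."""
import datetime
import subprocess

# Hypothetical RAID members; adjust to the actual array.
DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

def main() -> None:
    # Called from a weekly cron job: the ISO week number selects which
    # member gets the extended self-test, so the whole set is covered
    # in rotation (about once a month per drive on a four-disk set).
    week = datetime.date.today().isocalendar()[1]
    device = DEVICES[week % len(DEVICES)]
    subprocess.run(["smartctl", "-t", "long", device], check=True)

if __name__ == "__main__":
    main()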

	Jeff



