Date:	Mon, 16 Apr 2007 22:18:45 +0200 (CEST)
From:	Tomasz Kłoczko <kloczek@...y.mif.pg.gda.pl>
To:	Diego Calleja <diegocg@...il.com>
cc:	Christoph Hellwig <hch@...radead.org>,
	Stefan Richter <stefanr@...6.in-berlin.de>,
	Jan Engelhardt <jengelh@...ux01.gwdg.de>,
	Mike Snitzer <snitzer@...il.com>, Neil Brown <neilb@...e.de>,
	"David R. Litwin" <presently42@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: ZFS with Linux: An Open Plea

On Mon, 16 Apr 2007, Diego Calleja wrote:

> On Mon, 16 Apr 2007 17:46:50 +0200 (CEST), Tomasz Kłoczko <kloczek@...y.mif.pg.gda.pl> wrote:
>
>> also some other interesting numbers can be found at:
>> http://milek.blogspot.com/2006/08/hw-raid-vs-zfs-software-raid-part-ii.html
>
> So software raid can be faster than HW raid. News at 11.

Of course it can be true in most cases (probably even with some more
advanced RAID controllers). A few weeks ago I performed some basic
tests on a Dell 2950 with 8x73GB SAS disks .. just to kill time
(while waiting for access to some bigger box ;). This small iron box
has an internal RAID controller (in this box Dell uses an LSI Logic
SAS MegaRAID-based ctrl). Every combination of controller-level RAID
was slower than using it as plain JBOD with LVM or MD+LVM. The
difference between HW and soft RAID was not that big (1-6% depending
on the configuration), but the HW always produced worse results
(don't ask me why). In the end I decided to use these disks as four
RAID1 LUNs, only because under Linux I can't read SMART data from
each physical disk; protecting them with RAID at the controller
level and collecting SNMP traps from the DRAC card was a kind of
workaround for this (in my case it would be better to constantly
monitor disk health and collect SMART data to observe trends, e.g.
on zabbix graphs, and try to predict faults using triggers). On top
of this, different types of volumes were configured at the LVM level
(some with striping, some without; some with bigger, some with
smaller chunk sizes).
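
As a side note, a minimal sketch of the SMART trend monitoring I
mean above. This assumes smartctl from smartmontools and disks
visible as plain block devices (JBOD), which is exactly what the HW
RAID setup prevents; the device names and the watched attribute are
only examples:

#!/usr/bin/env python
# Toy SMART trend watcher; a sketch only.  Assumes smartctl from
# smartmontools is installed and the disks are visible as plain
# block devices (JBOD), which the HW RAID setup described above
# prevents.  Device names and the watched attribute are examples.
import subprocess
import time

DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]
ATTR = "Reallocated_Sector_Ct"     # attribute to watch for a trend
history = {disk: [] for disk in DISKS}

def read_attr(disk):
    # 'smartctl -A' prints the attribute table; take the raw value
    # (last column) from the line matching the attribute name.
    out = subprocess.run(["smartctl", "-A", disk],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if ATTR in line:
            try:
                return int(line.split()[-1])
            except ValueError:
                return None
    return None

while True:
    for disk in DISKS:
        value = read_attr(disk)
        if value is None:
            continue
        history[disk].append(value)
        # trigger on a rising trend, not on an absolute threshold;
        # the same idea as a zabbix trigger on a graphed item
        if len(history[disk]) > 1 and value > history[disk][-2]:
            print("%s: %s grew to %d" % (disk, ATTR, value))
    time.sleep(300)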

But .. in the case of ZFS the differences can be even more visible.
Q: why?
A: because most HW RAID controllers (no matter whether it is a
small/simple internal HW controller for DAS storage or an advanced
storage processor in a dedicated FC array) are _optimized_ for
classic FS workloads, but .. ZFS uses its bunch of devices in a
completely different way, with completely different I/O
characteristics. Watch the flashing LEDs on a box with disks
configured first for classic RAID<any_level> (no matter soft or HW)
and then configured for ZFS, and you will see a kind of organoleptic
difference.
And yes .. ZFS may be a kind of problem for some HW vendors ;)
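
To illustrate the point, a toy model (invented numbers, not a
benchmark) of that "flashing LEDs" observation: a classic FS
trickles writes out more or less continuously, while ZFS
accumulates dirty data and flushes it in periodic transaction
groups, hitting all the disks at once in short bursts. That burst
pattern is what a cache tuned for classic workloads doesn't expect:

import random

DURATION = 60.0     # seconds simulated
TXG = 5.0           # rough ZFS transaction group interval

# classic FS: the writes are spread out over time
classic = sorted(random.uniform(0, DURATION) for _ in range(600))

# ZFS-like: the same number of writes, clustered into short bursts
# around each transaction group flush
zfs = sorted(k * TXG + random.uniform(0, 0.3)
             for k in range(int(DURATION / TXG))
             for _ in range(50))

def per_second(times):
    # count writes landing in each one-second bucket
    buckets = [0] * int(DURATION)
    for t in times:
        buckets[min(int(t), len(buckets) - 1)] += 1
    return buckets

print("classic, writes/s:", per_second(classic)[:12])
print("zfs-ish, writes/s:", per_second(zfs)[:12])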

kloczek
-- 
-----------------------------------------------------------
*People don't have problems, they only create them for themselves*
-----------------------------------------------------------
Tomasz Kłoczko, sys adm @zie.pg.gda.pl|*e-mail: kloczek@...y.mif.pg.gda.pl*
