Date:	Mon, 6 Apr 2009 21:01:47 +0200
From:	Bart Van Assche <bart.vanassche@...il.com>
To:	Tomasz Chmielewski <mangoo@...g.org>
Cc:	Vladislav Bolkhovitin <vst@...b.net>, linux-kernel@...r.kernel.org,
	linux-scsi@...r.kernel.org,
	iscsitarget-devel@...ts.sourceforge.net,
	James Bottomley <James.Bottomley@...senpartnership.com>,
	scst-devel <scst-devel@...ts.sourceforge.net>,
	stgt@...r.kernel.org
Subject: Re: [Scst-devel] [ANNOUNCE]: Comparison of features sets between 
	different SCSI targets (SCST, STGT, IET, LIO)

On Mon, Apr 6, 2009 at 12:29 PM, Tomasz Chmielewski <mangoo@...g.org> wrote:
> The target is running a Debian Lenny 64-bit userspace on an Intel Celeron
> 2.93 GHz CPU with 2 GB RAM.
>
> The initiator is running a Debian Etch 64-bit userspace, open-iscsi 2.0-869,
> on an Intel Xeon 3050/2.13 GHz with 8 GB RAM.
>
>
> Each test was repeated 6 times; a "sync" was issued and caches were dropped
> on both sides before each test was started.
>
> dd was invoked as below, so 6.6 GB of data was read each time:
>
> dd if=/dev/sdag of=/dev/null bs=64k count=100000
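
For reference, the procedure quoted above can be sketched as a small script. This is illustrative only: the `bench_read` helper and the demo file are my names, the original read /dev/sdag on the initiator, and dropping the page cache needs root (the sketch skips it silently when unprivileged):

```shell
#!/bin/sh
# Sketch of the quoted benchmark loop: sync, drop caches, then a timed
# sequential read with dd, repeated a given number of times.
bench_read() {
    dev=$1; runs=$2
    i=1
    while [ "$i" -le "$runs" ]; do
        sync
        # Dropping the page cache requires root; continue without it
        # when run unprivileged.
        { echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true
        # dd prints its throughput summary on stderr; keep the last line.
        dd if="$dev" of=/dev/null bs=64k count=100000 2>&1 | tail -n 1
        i=$((i + 1))
    done
}

# Demo against an ordinary file (the original used /dev/sdag and 6 runs):
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=64k count=16 2>/dev/null
bench_read "$tmp" 2
rm -f "$tmp"
```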
>
>
> Data was read from two block devices:
> - /dev/md0, which is RAID-1 on two ST31500341AS 1.5 TB drives
> - encrypted dm-crypt device which is on top of /dev/md0
>
> The encrypted device was created with the following additional options
> passed to cryptsetup (they give the best performance on systems where the
> CPU is a bottleneck, but weaker security than the default options):
>
> -c aes-ecb-plain -s 128
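
For reference, those options correspond to a plain-mode mapping along these lines (a sketch only; /dev/md0 matches the quoted setup, but the mapping name "cryptmd0" is a placeholder, and the command needs root):

```shell
# Legacy plain-mode syntax, as used by cryptsetup at the time.
# ECB mode effectively ignores the per-sector IV and here uses a shorter
# 128-bit key, which lowers CPU cost but leaks patterns (identical
# plaintext sectors encrypt identically) compared to the CBC defaults.
cryptsetup -c aes-ecb-plain -s 128 create cryptmd0 /dev/md0
```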
>
>
> Generally, the CPU on the target was the bottleneck, so I also measured the
> load on the target.
>
>
> md0, crypt columns - throughput averages reported by dd
> us, sy, id, wa - CPU load averages from vmstat
>
>
> 1. Disk speeds on the target
>
> Raw performance: 102.17 MB/s
> Raw performance (encrypted):  50.21 MB/s
>
>
> 2. Read-ahead on the initiator: 256 (default); md0, crypt - MB/s
>
>                           md0     us  sy  id  wa | crypt   us  sy  id  wa
> STGT                      50.63   4% 45% 18% 33% | 32.52   3% 62% 16% 19%
> SCST (debug + no patches) 43.75   0% 26% 30% 44% | 42.05   0% 84%  1% 15%
> SCST (fullperf + patches) 45.18   0% 25% 33% 42% | 44.12   0% 81%  2% 17%

Hello Tomasz,

How is it possible that for this test the read performance through
STGT (50.63 MB/s) was higher than the raw read performance on the target
(50.21 MB/s)? Are you sure that all read buffers were flushed before
this test was started?

Bart.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
