Message-ID: <e2e108260904071327u6b6c6e43tdc76f648be0302dd@mail.gmail.com>
Date:	Tue, 7 Apr 2009 22:27:36 +0200
From:	Bart Van Assche <bart.vanassche@...il.com>
To:	Tomasz Chmielewski <mangoo@...g.org>
Cc:	Vladislav Bolkhovitin <vst@...b.net>, linux-kernel@...r.kernel.org,
	linux-scsi@...r.kernel.org,
	iscsitarget-devel@...ts.sourceforge.net,
	James Bottomley <James.Bottomley@...senpartnership.com>,
	scst-devel <scst-devel@...ts.sourceforge.net>,
	stgt@...r.kernel.org
Subject: Re: [Scst-devel] [ANNOUNCE]: Comparison of features sets between 
	different SCSI targets (SCST, STGT, IET, LIO)

On Mon, Apr 6, 2009 at 8:27 PM, Tomasz Chmielewski <mangoo@...g.org> wrote:
> Note that crypt performance for SCST was worse than that of STGT for large
> read-ahead values.
> Also, SCST performance on crypt device was more or less the same with 256
> and 16384 readahead values. I wonder why performance didn't increase here
> while increasing readahead values? Could anyone recheck if it's the same on
> some other system?

I have repeated the test for the non-encrypted case. Setup details:
* target: 2.6.29.1 kernel, 64-bit, Intel E8400 CPU @ 3 GHz, 4 GB RAM,
two ST3250410AS disks, with /dev/md3 set up in RAID-1 with a stripe
size of 32 KB, local reading speed of /dev/md3: 120 MB/s, I/O
scheduler: CFQ.
* initiator: 2.6.28.7 kernel, 64-bit, Intel E6750 CPU @ 2.66 GHz, 2 GB RAM.
* network: 1 Gbit/s Ethernet, the two systems connected back to back via
a crossover cable.

Each test was repeated four times. Before each test the target caches
were dropped via the command "sync; echo 3 >
/proc/sys/vm/drop_caches". The following test was run on the
initiator:

sync; echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/sdb of=/dev/null
bs=64K count=100000
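The measurement procedure above can be sketched as a small shell helper; the
run_test function is illustrative and not part of the original post, and
dropping the page cache is simply skipped when not running as root:

```shell
# Sketch of one measurement, matching the command above. run_test is a
# hypothetical helper; arguments are the device and the 64 KB block count.
run_test() {
    dev=$1     # device to read from, e.g. /dev/sdb on the initiator
    count=$2   # number of 64 KB blocks to read

    sync
    # Dropping the page cache requires root; skip quietly otherwise.
    [ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
    # Print only dd's throughput summary line.
    dd if="$dev" of=/dev/null bs=64K count="$count" 2>&1 | tail -n 1
}

# Each test was repeated four times, e.g.:
# for i in 1 2 3 4; do run_test /dev/sdb 100000; done
```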

Results with read-ahead set to 256 on the initiator, in MB/s:

STGT 56.7 +/- 0.3
SCST 56.9 +/- 1.1

Results with read-ahead set to 16384 on the initiator, in MB/s:

STGT 59.9 +/- 0.1
SCST 59.5 +/- 0.0
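The +/- figures are the mean and sample standard deviation over the four
runs. A minimal awk sketch of that computation (the input values below are
placeholders, not the measured data):

```shell
# stats: read one throughput value per line, print "mean +/- stddev".
# Uses the sample (n-1) standard deviation; a sketch, not from the post.
stats() {
    awk '{ n++; sum += $1; sumsq += $1 * $1 }
         END {
             mean = sum / n
             sd = (n > 1) ? sqrt((sumsq - n * mean * mean) / (n - 1)) : 0
             printf "%.1f +/- %.1f\n", mean, sd
         }'
}

# Placeholder input, one run per line:
printf '56.4\n56.8\n57.0\n56.6\n' | stats   # prints: 56.7 +/- 0.3
```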

In other words: slightly better results with the larger read-ahead
value, and a difference between the STGT and SCST results of well
below 1%.

Bart.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
