Message-ID: <e2e108260801302348h5c139da3r7ec262cd48e956@mail.gmail.com>
Date:	Thu, 31 Jan 2008 08:48:26 +0100
From:	"Bart Van Assche" <bart.vanassche@...il.com>
To:	"FUJITA Tomonori" <tomof@....org>
Cc:	fujita.tomonori@....ntt.co.jp, rdreier@...co.com,
	James.Bottomley@...senpartnership.com,
	torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
	vst@...b.net, linux-scsi@...r.kernel.org,
	scst-devel@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: Integration of SCST in the mainstream Linux kernel

On Jan 30, 2008 2:54 PM, FUJITA Tomonori <tomof@....org> wrote:
> On Wed, 30 Jan 2008 14:10:47 +0100
> "Bart Van Assche" <bart.vanassche@...il.com> wrote:
>
> > On Jan 30, 2008 11:56 AM, FUJITA Tomonori <tomof@....org> wrote:
> > >
> > > Sorry, I can't say. I don't know much about iSER. But seems that Pete
> > > and Robin can get the better I/O performance - line speed ratio with
> > > STGT.
> >
> > Robin Humble was using a DDR InfiniBand network, while my tests were
> > performed with an SDR InfiniBand network. Robin's results can't be
> > directly compared to my results.
>
> I know that you use different hardware. I used 'ratio' word.

Let's start by summarizing the relevant numbers from Robin's
measurements and my own.

Maximum bandwidth of the underlying physical medium: 2000 MB/s for a
DDR 4x InfiniBand network and 1000 MB/s for an SDR 4x InfiniBand
network.
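
(For those who want to check where these two figures come from: they
follow from the per-lane signaling rates. A minimal sketch of the
arithmetic in Python, illustrative only -- the function name is mine,
and it assumes 4x links with 8b/10b encoding as specified for SDR and
DDR InfiniBand:)

    # Derive the quoted link bandwidths from the per-lane signaling
    # rates (2.5 Gbit/s for SDR, 5 Gbit/s for DDR), assuming 4x links
    # and 8b/10b encoding.
    def link_bandwidth_mb_per_s(gbit_per_lane, lanes=4, encoding=0.8):
        data_gbit_per_s = gbit_per_lane * lanes * encoding
        return data_gbit_per_s * 1000 / 8   # 1 Gbit/s == 125 MB/s

    print(link_bandwidth_mb_per_s(2.5))  # SDR 4x: 1000.0 MB/s
    print(link_bandwidth_mb_per_s(5.0))  # DDR 4x: 2000.0 MB/s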

Maximum bandwidth reported by the OFED ib_write_bw test program: 1473
MB/s for Robin's setup and 933 MB/s for my setup. These numbers match
published ib_write_bw results (see e.g. figure 11 in
http://www.usenix.org/events/usenix06/tech/full_papers/liu/liu_html/index.html
or chapter 7 in
http://www.voltaire.com/ftp/rocks/HCA-4X0_Linux_GridStack_4.3_Release_Notes_DOC-00171-A00.pdf).

Throughput measured for communication via STGT + iSER to a remote RAM
disk, using direct I/O with dd: 800 MB/s for writing and 751 MB/s for
reading in Robin's setup, and 647 MB/s for writing and 589 MB/s for
reading in my setup.

From this we can compute the ratio of I/O performance to ib_write_bw
bandwidth: 54% for writing and 51% for reading in Robin's setup, and
69% for writing and 63% for reading in my setup. In other words, the
bandwidth utilization was slightly better in my setup than in Robin's
setup. This is no surprise -- the faster a communication link is, the
harder it is to use all of the available bandwidth.
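
(For clarity, here is the arithmetic behind those percentages as a
small Python snippet. The numbers are the ones quoted above; the
setup labels are just mine for illustration:)

    # Ratio of STGT + iSER throughput (dd, direct I/O) to the
    # bandwidth reported by ib_write_bw, for both setups.
    ib_write_bw = {"Robin (DDR 4x)": 1473,          # MB/s
                   "Bart (SDR 4x)": 933}
    stgt_io = {"Robin (DDR 4x)": (800, 751),        # (write, read) MB/s
               "Bart (SDR 4x)": (647, 589)}

    for setup, (write_mb, read_mb) in stgt_io.items():
        bw = ib_write_bw[setup]
        print("%s: write %.0f%%, read %.0f%%"
              % (setup, 100.0 * write_mb / bw, 100.0 * read_mb / bw))
    # Robin (DDR 4x): write 54%, read 51%
    # Bart (SDR 4x): write 69%, read 63%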

So why did you state that in Robin's tests the I/O performance to
line speed ratio was better than in my tests?

Bart Van Assche.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
