Message-Id: <1202262200.2220.118.camel@haakon2.linux-iscsi.org>
Date:	Tue, 05 Feb 2008 17:43:20 -0800
From:	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
To:	Vladislav Bolkhovitin <vst@...b.net>
Cc:	Jeff Garzik <jeff@...zik.org>, Alan Cox <alan@...rguk.ukuu.org.uk>,
	Mike Christie <michaelc@...wisc.edu>,
	linux-scsi@...r.kernel.org,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	James Bottomley <James.Bottomley@...senPartnership.com>,
	scst-devel@...ts.sourceforge.net,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
	Julian Satran <Julian_Satran@...ibm.com>
Subject: Re: Integration of SCST in the mainstream Linux kernel

On Tue, 2008-02-05 at 16:11 -0800, Nicholas A. Bellinger wrote:
> On Tue, 2008-02-05 at 22:21 +0300, Vladislav Bolkhovitin wrote:
> > Jeff Garzik wrote:
> > >>> iSCSI is way, way too complicated. 
> > >>
> > >> I fully agree.  On one hand, all that complexity is unavoidable
> > >> for the case of multiple connections per session, but for the
> > >> regular case of one connection per session it should be a lot
> > >> simpler.
> > > 
> > > Actually, think about those multiple connections...  we already had to 
> > > implement fast-failover (and load bal) SCSI multi-pathing at a higher 
> > > level.  IMO that portion of the protocol is redundant:   You need the 
> > > same capability elsewhere in the OS _anyway_, if you are to support 
> > > multi-pathing.
> > 
> > I'm thinking about MC/S as a way to improve performance using
> > several physical links.  There's no other way, except MC/S, to keep
> > command processing order in that case.  So it's a really valuable
> > property of iSCSI, although one with limited application.
> > 
> > Vlad
> > 
> 
> Greetings,
> 
> I have always observed with LIO SE/iSCSI target mode (as well as with
> other software initiators we can leave out of the discussion for now;
> congrats to the Open-iSCSI folks on the recent release :-) that
> execution core, hardware thread, and inter-nexus performance per 1
> Gb/sec ethernet port scales up to 4x and 2x core x86_64 very well with
> MC/S.  I have been seeing 450 MB/sec using 2x socket, 4x core x86_64
> for a number of years with MC/S.  The same holds using MC/S on 10
> Gb/sec (on PCI-X v2.0 266 MHz as well, which was the first transport
> that LIO Target ran on that was able to handle duplex ~1200 MB/sec
> with 3 initiators and MC/S).  In the point-to-point 10 Gb/sec tests on
> IBM p404 machines, the initiators were able to reach ~910 MB/sec with
> MC/S.  Open/iSCSI was able to go a bit faster (~950 MB/sec) because it
> uses struct sk_buff directly.
> 
 
Sorry, these were IBM p505 Express machines (not p404, duh), which had
a 2x socket, 2x core POWER5 setup.  These (along with an IBM xSeries
machine) were the only ones available with PCI-X v2.0, and that is
probably still the case. :-)

Also, these numbers were with a ~9000 byte MTU (I don't recall what the
hardware limit on the 10 Gb/sec switch was), doing direct struct iovec
to preallocated struct page mapping for payload on the target side.
This is known as the RAMDISK_DR plugin in the LIO-SE.  On the
initiator, LTP disktest and O_DIRECT were used for direct access to the
SCSI block device, as in the sketch below.
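
For anyone unfamiliar with that initiator setup, here is a minimal
sketch (not the actual LTP disktest source) of the access pattern:
open the block device with O_DIRECT so the page cache is bypassed, and
issue I/O from a suitably aligned buffer.  The device name and the
4096 byte alignment below are assumptions for illustration.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 1 << 20;		/* 1 MB per I/O */
	void *buf;

	/* O_DIRECT needs buffer/offset/length aligned to the device
	 * logical block size; 4096 bytes is an assumption that covers
	 * most hardware. */
	if (posix_memalign(&buf, 4096, len))
		return 1;

	/* /dev/sdX is a placeholder for the iSCSI-attached disk. */
	int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	ssize_t ret = pread(fd, buf, len, 0);	/* bypasses the page cache */
	if (ret < 0)
		perror("pread");
	else
		printf("read %zd bytes directly from the device\n", ret);

	close(fd);
	free(buf);
	return 0;
}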

I can dig up this paper if anyone is interested.
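
On Vlad's ordering point above: MC/S can keep command ordering across
several physical links because every command carries a session-wide
CmdSN regardless of which connection it arrives on, so the target can
restore total order before handing commands to the SCSI engine.  A toy
userspace illustration (nothing like the actual LIO-SE code, and
ignoring CmdSN wraparound):

#include <pthread.h>
#include <stdint.h>

struct session {
	pthread_mutex_t lock;
	pthread_cond_t  cond;
	uint32_t        exp_cmd_sn;	/* next CmdSN allowed to execute */
};

/* Called from any connection's rx thread; blocks until this command's
 * turn comes up in the session-wide CmdSN order. */
static void execute_in_order(struct session *s, uint32_t cmd_sn)
{
	pthread_mutex_lock(&s->lock);
	while (cmd_sn != s->exp_cmd_sn)
		pthread_cond_wait(&s->cond, &s->lock);

	/* ... hand the command to the SCSI processing engine here ... */

	s->exp_cmd_sn++;
	pthread_cond_broadcast(&s->cond);
	pthread_mutex_unlock(&s->lock);
}

Each connection's rx thread calls execute_in_order() with the CmdSN it
pulled off the wire, and commands reach the SCSI engine in session
order no matter how the links interleave them.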

--nab
