Message-Id: <1202236627.3133.55.camel@localhost.localdomain>
Date:	Tue, 05 Feb 2008 12:37:07 -0600
From:	James Bottomley <James.Bottomley@...senPartnership.com>
To:	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
Cc:	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Vladislav Bolkhovitin <vst@...b.net>,
	Bart Van Assche <bart.vanassche@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
	linux-scsi@...r.kernel.org, scst-devel@...ts.sourceforge.net,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Mike Christie <michaelc@...wisc.edu>,
	Julian Satran <Julian_Satran@...ibm.com>
Subject: Re: Integration of SCST in the mainstream Linux kernel

This email somehow didn't manage to make it to the list (I suspect
because it had HTML attachments).

James

---

From: Julian Satran <Julian_Satran@...ibm.com>
To: Nicholas A. Bellinger <nab@...ux-iscsi.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Alan Cox <alan@...rguk.ukuu.org.uk>,
    Bart Van Assche <bart.vanassche@...il.com>,
    FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
    James Bottomley <James.Bottomley@...senPartnership.com>, ...
Subject: Re: Integration of SCST in the mainstream Linux kernel
Date: Mon, 4 Feb 2008 21:31:48 -0500 (20:31 CST)


Well stated. In fact, the "layers" above Ethernet do provide the services
that make the TCP/IP stack compelling - a whole complement of services.
ALL of the required services (naming, addressing, discovery, security,
etc.) will have to be recreated if you take the FCoE route. That makes
good business for some, but it is not necessary for the users. Those
services, BTW, are not on the data path and are not "overhead" - see the
sketch below.
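
[Editor's note: a small Python sketch to make the "not on the data path"
point concrete - illustrative, not from the original mail. It uses a DNS
lookup via getaddrinfo() as a stand-in for the naming/discovery services
listed above, and a loopback socketpair as a stand-in for an established
session; port 3260 (the well-known iSCSI port) is just a plausible
lookup target. The lookup cost is paid once at setup and never appears
on the per-I/O path.]

import socket
import time

# Control path: one naming/discovery-style lookup at setup time.
t0 = time.perf_counter()
socket.getaddrinfo("localhost", 3260)
t_lookup = time.perf_counter() - t0

# Data path: an already-established session, modeled by a socketpair.
a, b = socket.socketpair()
payload = b"x" * 4096
n = 10_000
t0 = time.perf_counter()
for _ in range(n):
    a.sendall(payload)
    got = 0
    while got < len(payload):          # drain the full payload
        got += len(b.recv(65536))
t_io = (time.perf_counter() - t0) / n

print(f"one-time lookup: {t_lookup * 1e6:.1f} us; "
      f"per-I/O cost: {t_io * 1e6:.1f} us (lookup amortized to ~0)")
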
The TCP/IP stack pathlength is decently low. What makes most
implementations poor is that they were naively extended into the SMP
world. Recent (published) implementations from IBM and Intel show
excellent performance (4-6 times the regular stack). Unfortunately, I do
not have latency numbers (as the community's major stress has been
throughput), but I assume that RDMA (not necessarily hardware RDMA)
and/or the use of InfiniBand for latency-critical applications within
clusters may be the ultimate low-latency solution. Ethernet has some
inherent latency issues (the bridges) that are inherited by anything on
Ethernet (FCoE included). The IP protocol stack is not inherently slow,
but some implementations are somewhat sluggish.
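
[Editor's note: since the mail mentions lacking latency numbers, here is
a minimal ping-pong harness one could use to collect them - my sketch,
not from the thread. It measures mean round-trip time over loopback with
TCP_NODELAY set; loopback deliberately excludes the NIC and bridge
latency the paragraph attributes to Ethernet, so it isolates the
software-stack pathlength being discussed.]

import socket
import threading
import time

def echo_server(listener):
    conn, _ = listener.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while True:
        data = conn.recv(4096)
        if not data:
            return
        conn.sendall(data)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))               # ephemeral port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle delay

msg = b"x" * 64                          # small, latency-bound message
n = 10_000
t0 = time.perf_counter()
for _ in range(n):
    cli.sendall(msg)
    got = 0
    while got < len(msg):                # wait for the full echo
        got += len(cli.recv(4096))
rtt = (time.perf_counter() - t0) / n
print(f"mean loopback RTT: {rtt * 1e6:.1f} us over {n} round trips")
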
But instead of replacing these stacks with new and half-baked
contraptions, we would all be better off improving what we have and
understand.

In the whole debate around FCoE I heard a single argument that may have
some merit - building iSCSI-to-FCP converters to support legacy islands
of FCP (read: storage products that do not support iSCSI natively) is
expensive. It is technically correct - only that FCoE eliminates an
expense at the wrong end of the wire: it reduces the cost of the storage
box at the expense of added cost at the server (and usually there are
many servers using a storage box - the arithmetic below makes this
concrete). FCoE vendors are also bound to provide FCP-like services for
FCoE - naming, security, discovery, etc. - that do not exist on Ethernet.
It is a good business for FCoE vendors - a duplicate set of solutions
for users.
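
[Editor's note: the "wrong end of the wire" point is just arithmetic.
The counts and prices below are invented for illustration - nothing in
the mail gives figures - but the shape of the calculation is the point:
per-server costs multiply, per-array savings do not.]

# Hypothetical prices -- purely illustrative, not from the email.
servers_per_array = 20      # "usually there are many servers using a storage box"
saving_on_array = 5_000     # saving from keeping the array FCP-only
extra_per_server = 800      # added adapter/stack cost at each server

net_added_cost = servers_per_array * extra_per_server - saving_on_array
print(f"net added cost per storage box: ${net_added_cost:,}")  # $11,000
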

It should be apparent by now that if one speaks about a "converged"
network, we should be speaking about an IP network and not about
Ethernet. If we take this route, we might perhaps also arrive at
physical infrastructure variants that support very low latency better
than Ethernet does, and we might be able to use them with the same
"stack" - a definitely forward-looking solution.

IMHO it is foolish to insist on throwing away the whole stack whenever
we make a slight improvement in the physical layer of the network. We
have a substantial investment and body of knowledge in the protocol
stack, and nothing proposed improves on it - certainly not in its total
level of service, nor in performance.

Julo

