Date:	Mon, 04 Feb 2008 15:12:25 -0800
From:	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
To:	James Bottomley <James.Bottomley@...senPartnership.com>
Cc:	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Vladislav Bolkhovitin <vst@...b.net>,
	Bart Van Assche <bart.vanassche@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
	linux-scsi@...r.kernel.org, scst-devel@...ts.sourceforge.net,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Mike Christie <michaelc@...wisc.edu>,
	Julian Satran <Julian_Satran@...ibm.com>
Subject: Re: Integration of SCST in the mainstream Linux kernel

On Mon, 2008-02-04 at 17:00 -0600, James Bottomley wrote:
> On Mon, 2008-02-04 at 22:43 +0000, Alan Cox wrote:
> > > better. So for example, I personally suspect that ATA-over-ethernet is way 
> > > better than some crazy SCSI-over-TCP crap, but I'm biased for simple and 
> > > low-level, and against those crazy SCSI people to begin with.
> > 
> > Current ATAoE isn't. It can't support NCQ. A variant that did NCQ and IP
> > would probably trash iSCSI for latency if nothing else.
> 
> Actually, there's also FCoE now ... which is essentially SCSI
> encapsulated in the Fibre Channel Protocol (FCP) running over ethernet
> with jumbo frames.  It does the standard SCSI TCQ, so it should answer
> all the latency pieces.  Intel even has an implementation:
> 
> http://www.open-fcoe.org/
> 
> I tend to prefer the low levels as well.  The whole disadvantage for IP
> as regards iSCSI was the layers of protocols on top of it for
> addressing, authenticating, encrypting and finding any iSCSI device
> anywhere in the connected universe.

Btw, while simple in-band discovery of iSCSI exists, standards-based
IP storage deployments (iSCSI and iFCP) use iSNS (RFC 4171) for
discovery and network fabric management: things like sending a state
change notification when a particular network portal is going away, so
that the initiator can bring up a different communication path to a
different network portal, etc.
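
For the curious, the iSNS protocol itself is lightweight.  Here is a
rough Python sketch of framing a DevAttrQry PDU (my own toy
illustration, not from any real implementation; the function ID,
attribute tag, and flag bits are from my reading of RFC 4171 and
should be double-checked):

    # Toy sketch of iSNSP (RFC 4171) PDU framing.
    import struct

    ISNSP_VERSION     = 0x0001
    FUNC_DEV_ATTR_QRY = 0x0002  # DevAttrQry (assumed value, see RFC 4171)
    TAG_ISCSI_NAME    = 32      # iSCSI Name attribute tag (assumed value)

    def isns_attr(tag, value):
        """One TLV attribute: 4-byte tag, 4-byte length, value padded to 4."""
        value += b'\x00' * (-len(value) % 4)
        return struct.pack('!II', tag, len(value)) + value

    def isns_pdu(func_id, payload, xid=1):
        """12-byte iSNSP header: version, function ID, payload length,
        flags, transaction ID, sequence ID -- 16-bit each, network order."""
        flags = 0xC400  # first + last PDU, client sender (bit layout assumed)
        return struct.pack('!6H', ISNSP_VERSION, func_id, len(payload),
                           flags, xid, 0) + payload

    query = isns_pdu(FUNC_DEV_ATTR_QRY,
                     isns_attr(TAG_ISCSI_NAME, b'iqn.2008-02.org.example:init0'))
    print(len(query), query[:12].hex())
    # A real client would send this over TCP to the iSNS server (port 3205),
    # and would also register with SCNReg to receive the state change
    # notifications mentioned above.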

> 
> I tend to see loss of routing from operating at the MAC level as a
> nicely justifiable tradeoff (most storage networks tend to be hubbed or
> switched anyway).  Plus an ethernet MAC with jumbo frames is a
> large-framed, nearly lossless medium, which is practically what FCP is
> expecting.  If you really have to connect large remote sites ... well,
> that's what tunnelling bridges are for.
> 
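
Just to put rough numbers on the jumbo frame point above: below is a
back-of-the-envelope Python sketch (my own illustration; the sizes are
the standard FC frame limits, and I'm ignoring the FCoE encapsulation
overhead itself) showing why a full FC frame cannot ride in a standard
1500-byte Ethernet MTU:

    # Why FCoE wants jumbo (or at least "baby jumbo") frames.  A full
    # Fibre Channel frame is a 24-byte header + up to 2112 bytes of
    # payload + a 4-byte CRC; that simply does not fit in a standard
    # 1500-byte Ethernet MTU, and FCoE deliberately avoids segmenting.
    FC_HEADER, FC_MAX_PAYLOAD, FC_CRC = 24, 2112, 4
    fc_frame = FC_HEADER + FC_MAX_PAYLOAD + FC_CRC   # 2140 bytes

    for mtu in (1500, 9000):
        verdict = "fits" if fc_frame <= mtu else "does NOT fit"
        print("MTU %5d: full FC frame (%d bytes) %s" % (mtu, fc_frame, verdict))

So even "baby jumbo" MTUs in the 2.5 KB range would do; full 9 KB
jumbo frames are a convenience rather than a hard requirement.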

Some of the points made by Julo (Julian Satran) on the IPS TWG iSCSI
vs. FCoE thread:

      * the network is limited in physical span and logical span (number
        of switches)
      * flow-control/congestion control is achieved with a mechanism
        adequate for a limited span network (credits; see the sketch
        after this list). The packet loss rate is almost nil and that
        allows FCP to avoid using a transport (end-to-end) layer
      * FCP switches are simple (addresses are local and the memory
        requirements can be limited through the credit mechanism)
      * The credit mechanism is highly unstable for large networks
        (check switch vendors' planning docs for the network diameter
        limits) – the scaling argument
      * Ethernet has no credit mechanism, and any mechanism with a
        similar effect increases the endpoint cost. Building a
        transport layer in the protocol stack has always been the
        preferred choice of the networking community – the community
        argument
      * The "performance penalty" of a complete protocol stack has
        always been overstated (and overrated). Advances in protocol
        stack implementation and finer tuning of the congestion control
        mechanisms make conventional TCP/IP performing well even at 10
        Gb/s and over. Moreover the multicore processors that become
        dominant on the computing scene have enough compute cycles
        available to make any "offloading" possible as a mere code
        restructuring exercise (see the stack reports from Intel, IBM
        etc.)
      * Building on a complete stack makes available a wealth of
        operational and management mechanisms built over the years by
        the networking community (routing, provisioning, security,
        service location etc.) – the community argument
      * Higher-level storage access over an IP network is widely
        available, and having both block and file served over the same
        connection with the same support and management structure is
        compelling – the community argument
      * Highly efficient networks are easy to build over IP with optimal
        (shortest path) routing, while Layer 2 networks use bridging and
        are limited by the logical tree structure that bridges must
        follow. The effort to combine routers and bridges (rbridges)
        promises to change that, but it will take some time to finalize
        (and we don't know exactly how it will operate). Until then, the
        scale of Layer 2 networks is going to be seriously limited – the
        scaling argument
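
To make the credit point from the list above concrete, here is a
minimal Python sketch (my own toy model, not FC code) of
buffer-to-buffer style credit flow control: each credit corresponds to
a guaranteed receive buffer, and a sender with zero credits stalls
instead of dropping frames:

    # Toy model of credit-based (FC buffer-to-buffer style) flow control.
    from collections import deque

    class Receiver:
        def __init__(self, buffers):
            self.free = buffers      # free receive buffers
            self.queue = deque()

        def accept(self, frame):
            assert self.free > 0     # a correct sender never overruns us
            self.free -= 1
            self.queue.append(frame)

        def drain(self):
            """Consume one buffered frame; returns a credit (like R_RDY)."""
            if not self.queue:
                return 0
            self.queue.popleft()
            self.free += 1
            return 1

    class Sender:
        def __init__(self, credits):
            self.credits = credits

        def send(self, rx, frame):
            if self.credits == 0:
                return False         # stall: no credit, no transmission
            self.credits -= 1
            rx.accept(frame)
            return True

    rx, tx = Receiver(buffers=2), Sender(credits=2)
    assert tx.send(rx, "f0") and tx.send(rx, "f1")
    assert not tx.send(rx, "f2")     # link stalls rather than dropping
    tx.credits += rx.drain()         # credit returns as rx frees a buffer
    assert tx.send(rx, "f2")

This is also where the scaling argument falls out: keeping a long link
busy needs roughly bandwidth x round-trip time worth of buffers (and
hence credits) on every hop, whereas TCP's end-to-end window only
needs that memory at the endpoints.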

Perhaps it would be worthwhile to get some more linux-net guys in on
the discussion.  :-)

--nab


> James

