Message-ID: <e2e108260802010108h4595040aq236db9c6d0476571@mail.gmail.com>
Date: Fri, 1 Feb 2008 10:08:28 +0100
From: "Bart Van Assche" <bart.vanassche@...il.com>
To: "Nicholas A. Bellinger" <nab@...ux-iscsi.org>
Cc: "Vladislav Bolkhovitin" <vst@...b.net>,
"FUJITA Tomonori" <tomof@....org>, fujita.tomonori@....ntt.co.jp,
rdreier@...co.com, James.Bottomley@...senpartnership.com,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
linux-scsi@...r.kernel.org, scst-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org
Subject: Re: Integration of SCST in the mainstream Linux kernel
On Jan 31, 2008 7:15 PM, Nicholas A. Bellinger <nab@...ux-iscsi.org> wrote:
>
> I meant small referring to storage on IB fabrics which has usually been
> in the research and national lab settings, with some other vendors
> offering IB as an alternative storage fabric for those who [w,c]ould not
> wait for 10 Gb/sec copper Ethernet and Direct Data Placement to come
> online. These types of numbers compared to say traditional iSCSI, that
> is getting used all over the place these days in areas I won't bother
> listing here.
InfiniBand has several advantages over 10 Gbit/s Ethernet (the list
below probably isn't complete):
- Lower latency. Communication latency is determined by more than
switch latency alone: the whole InfiniBand protocol stack was designed
with low latency in mind. Low latency is very important for database
software that accesses storage over a network.
- High availability is implemented at the network layer. Suppose that
a group of servers has dual-port network interfaces and is
interconnected via a so-called dual star topology. With an InfiniBand
network, failover after a single failure (link or switch) is handled
without any operating system or application intervention. With
Ethernet, failover after a single failure must be handled either by
the operating system or by the application.
- You do not have to use iSER or SRP to use the bandwidth of an
InfiniBand network effectively. SDP (Sockets Direct Protocol) makes it
possible for applications to benefit from RDMA through the classic
IPv4 Berkeley sockets interface, without source changes (a small
sketch of such an unmodified sockets program follows below this list).
A software SDP implementation is already available today via OFED. On
an SDR 4x InfiniBand network, iperf reports 470 MB/s in a
single-threaded test and 975 MB/s in a test with two threads; these
tests were performed with the OFED 1.2.5.4 SDP implementation. Future
SDP implementations may perform even better. (Note: I have not yet
been able to get iSCSI over SDP working.)
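
To make the SDP point concrete, here is a minimal sketch of a plain
Berkeley-sockets client in C. Nothing in it is InfiniBand-specific;
the assumption (based on how the OFED libsdp preload library is meant
to be used) is that running the same binary under LD_PRELOAD=libsdp.so
moves the SOCK_STREAM connection onto SDP, so the application gets the
RDMA bandwidth without any source changes. Server name, port and
transfer size below are placeholders, not anything from a real setup.

/* sdp_client.c - plain Berkeley sockets client; nothing IB-specific.
 *
 * Build:              cc -O2 -o sdp_client sdp_client.c
 * Run over plain TCP: ./sdp_client <server> <port>
 * Run over SDP (assumed setup with the OFED preload library):
 *                     LD_PRELOAD=libsdp.so ./sdp_client <server> <port>
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
	struct addrinfo hints, *res;
	static char buf[64 * 1024];
	long long sent = 0;
	int s, i, err;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <server> <port>\n", argv[0]);
		return 1;
	}

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_INET;	/* classic IPv4 sockets interface */
	hints.ai_socktype = SOCK_STREAM;

	err = getaddrinfo(argv[1], argv[2], &hints, &res);
	if (err) {
		fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
		return 1;
	}

	s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
	if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0) {
		perror("socket/connect");
		return 1;
	}

	/* Push 1 GB through the socket; whether the transport underneath
	 * is TCP or SDP is invisible to the application. */
	memset(buf, 0xab, sizeof(buf));
	for (i = 0; i < 16384; i++) {
		ssize_t n = write(s, buf, sizeof(buf));
		if (n < 0) {
			perror("write");
			break;
		}
		sent += n;
	}

	printf("sent %lld bytes\n", sent);
	close(s);
	freeaddrinfo(res);
	return 0;
}

(For reference, a two-stream iperf test is typically started with its
-P 2 option; the exact flags depend on the iperf version.)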
We should leave the choice of networking technology open -- both
Ethernet and InfiniBand have specific advantages.
See also:
InfiniBand Trade Association, InfiniBand Architecture Specification
Release 1.2.1, http://www.infinibandta.org/specs/register/publicspec/
Bart Van Assche.