Message-Id: <1165249251.32724.26.camel@stevo-desktop>
Date: Mon, 04 Dec 2006 10:20:51 -0600
From: Steve Wise <swise@...ngridcomputing.com>
To: Roland Dreier <rdreier@...co.com>
Cc: Evgeniy Polyakov <johnpol@....mipt.ru>, netdev@...r.kernel.org,
openib-general@...nib.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 04/13] Connection Manager
On Mon, 2006-12-04 at 07:45 -0800, Roland Dreier wrote:
> > Could you convince network core developers that it is not own TCP
> > implementation which will mess with existing one?
>
> I'm not qualified to comment on this...
>
I don't understand your question.
> > This and a lot of other changes in this driver definitely says you
> > implement your own stack of protocols on top of infiniband hardware.
>
> ...but I do know this driver is for 10-gig ethernet HW.
>
There is no SW TCP stack in this driver. The HW supports RDMA over
TCP/IP over 10GbE, which is required for zero-copy RDMA over Ethernet
(aka iWARP). The device is a 10 GbE device, not InfiniBand. The
Ethernet driver, upon which the RDMA driver depends, acts both as a
traditional Ethernet NIC for the Linux stack and as a TCP offload
device for the RDMA driver, allowing RDMA connections to be established.

The Connection Manager (patch 04/13) exchanges messages with the
Ethernet driver to set up HW TCP connections used for RDMA. While this
is indeed TCP offload, it is _not_ integrated with the sockets layer or
the Linux network stack, and it does not offload sockets connections.
It only supports offloaded connections for the RDMA driver to do iWARP.
The Ammasso device (drivers/infiniband/hw/amso1100) is another example
of this, as are fully offloaded ("deep") iSCSI adapters.
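
For context (this sketch is not part of the patch set): below is a
minimal userspace example of how an application would set up an iWARP
connection through the RDMA CM (librdmacm), which in turn drives the
kernel connection managers described above; on a device like this the
connection lands on the HW TCP engine rather than the host stack. The
peer address/port and queue sizes are placeholders, and error handling
is trimmed for brevity.

/* Illustrative only: active-side iWARP connection setup via librdmacm. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <netdb.h>
#include <rdma/rdma_cma.h>

static struct rdma_cm_event *wait_for(struct rdma_event_channel *ch,
                                      enum rdma_cm_event_type type)
{
        struct rdma_cm_event *ev;

        if (rdma_get_cm_event(ch, &ev) || ev->event != type) {
                fprintf(stderr, "unexpected CM event\n");
                exit(1);
        }
        return ev;              /* caller must rdma_ack_cm_event() */
}

int main(void)
{
        struct rdma_event_channel *ch = rdma_create_event_channel();
        struct ibv_qp_init_attr qp_attr;
        struct rdma_conn_param param;
        struct rdma_cm_event *ev;
        struct rdma_cm_id *id;
        struct addrinfo *res;

        rdma_create_id(ch, &id, NULL, RDMA_PS_TCP);

        /* placeholder peer address and port */
        getaddrinfo("192.168.0.1", "7471", NULL, &res);
        rdma_resolve_addr(id, NULL, res->ai_addr, 2000);
        ev = wait_for(ch, RDMA_CM_EVENT_ADDR_RESOLVED);
        rdma_ack_cm_event(ev);

        rdma_resolve_route(id, 2000);
        ev = wait_for(ch, RDMA_CM_EVENT_ROUTE_RESOLVED);
        rdma_ack_cm_event(ev);

        /* NULL PD/CQs: rdma_create_qp allocates defaults for us */
        memset(&qp_attr, 0, sizeof(qp_attr));
        qp_attr.qp_type = IBV_QPT_RC;
        qp_attr.cap.max_send_wr = qp_attr.cap.max_recv_wr = 4;
        qp_attr.cap.max_send_sge = qp_attr.cap.max_recv_sge = 1;
        rdma_create_qp(id, NULL, &qp_attr);

        memset(&param, 0, sizeof(param));
        param.initiator_depth = param.responder_resources = 1;
        rdma_connect(id, &param);
        ev = wait_for(ch, RDMA_CM_EVENT_ESTABLISHED);
        rdma_ack_cm_event(ev);

        printf("iWARP connection established\n");

        rdma_disconnect(id);
        rdma_destroy_qp(id);
        rdma_destroy_id(id);
        rdma_destroy_event_channel(ch);
        freeaddrinfo(res);
        return 0;
}

Build with something like: gcc example.c -lrdmacm -libverbs. The point
is that connection setup flows through the RDMA CM event channel, not
through a socket, so no sockets-layer offload is involved.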
Steve.