Message-Id: <1165335463.16087.83.camel@stevo-desktop>
Date:	Tue, 05 Dec 2006 10:17:43 -0600
From:	Steve Wise <swise@...ngridcomputing.com>
To:	Evgeniy Polyakov <johnpol@....mipt.ru>
Cc:	netdev@...r.kernel.org, Roland Dreier <rdreier@...co.com>,
	linux-kernel@...r.kernel.org, openib-general@...nib.org
Subject: Re: [openib-general] [PATCH  v2 04/13] Connection Manager

On Tue, 2006-12-05 at 10:12 -0600, Steve Wise wrote:
> On Tue, 2006-12-05 at 18:59 +0300, Evgeniy Polyakov wrote:
> > On Tue, Dec 05, 2006 at 09:39:58AM -0600, Steve Wise (swise@...ngridcomputing.com) wrote:
> > > > Phrases like "MPA-aware TCP" raise a lot of questions - briefly,
> > > > it suggests that the hardware (even if it is called an ethernet
> > > > driver) can create and work with its own TCP flows, potentially
> > > > modified however it likes, as seen in the driver.  According to
> > > > the hardware descriptions, such flows will likely not be seen by
> > > > upper layers like the OS network stack.
> > > > 
> > > > Is it correct?
> > > > 
> > > 
> > > I don't quite get your point about the driver aspect of this.
> > > 
> > > The HW manages the iWARP connection, including data flow.  It
> > > adheres to the MPA, RDDP, and RDMAP protocol specification IDs
> > > from the IETF.  The HW manages how data gets pushed out in the
> > > RDMA stream.  The RDMA driver just requests a TCP connection and
> > > does the MPA exchange, then tells the hardware to move the
> > > connection into RDMA mode.  From that point on, the driver simply
> > > shuffles IO work requests from the consumer application to the
> > > hardware and handles asynchronous events while the connection is
> > > up and running.
> > 
> > My main concern about this is the fact that protocol handling is
> > split into SW and HW parts, and until negotiation is completed
> > those parts are completely unrelated to each other, so the
> > requested TCP connection can leak into the main stack, and the
> > main stack can send some packets which could be mistaken for MPA
> > negotiation.
> > 
> 
> Ah.  Data from an offloaded connection cannot leak into the main
> stack, nor vice versa.  We can take an active RDMA connection
> establishment as an example if you want:  Once the message is sent
> to the HW to "setup a TCP connection from addr/port a.b to addr/port
> c.d", then packets on that connection (that 4-tuple) will always be
> delivered to the RDMA driver, not the native stack.  If the packet
> received after the connection is set up is -not- an MPA reply (in
> this example), then the connection is aborted.  Once the connection
> is aborted.
             ^ The 4-tuple can then be reused for RDMA or native
stack TCP connections.
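
Purely as an illustration of that steering rule (this is invented
userspace C, not code from the patch series or the firmware; every
name below is made up): packets matching an offloaded 4-tuple go to
the RDMA driver, everything else goes to the native stack, and a
first packet that is not an MPA reply aborts the offloaded
connection and frees the 4-tuple for reuse:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 4-tuple key; the real HW keeps this state on-chip. */
struct tuple4 {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
};

enum conn_state { CONN_SETUP, CONN_RDMA };

struct offload_conn {
	struct tuple4 key;
	enum conn_state state;
	bool in_use;
};

#define MAX_OFFLOAD 16
static struct offload_conn offload_tbl[MAX_OFFLOAD];

static struct offload_conn *offload_lookup(const struct tuple4 *t)
{
	int i;

	for (i = 0; i < MAX_OFFLOAD; i++)
		if (offload_tbl[i].in_use &&
		    offload_tbl[i].key.saddr == t->saddr &&
		    offload_tbl[i].key.daddr == t->daddr &&
		    offload_tbl[i].key.sport == t->sport &&
		    offload_tbl[i].key.dport == t->dport)
			return &offload_tbl[i];
	return NULL;	/* not offloaded: the native stack owns it */
}

/*
 * Steering decision for one received packet.  A connection still in
 * SETUP expects exactly an MPA reply; anything else aborts it and
 * frees the 4-tuple so either stack can reuse it.
 */
static void rx_packet(const struct tuple4 *t, bool is_mpa_reply)
{
	struct offload_conn *c = offload_lookup(t);

	if (!c) {
		printf("-> native stack\n");
	} else if (c->state == CONN_RDMA) {
		printf("-> RDMA driver (RDMA mode)\n");
	} else if (is_mpa_reply) {
		c->state = CONN_RDMA;
		printf("-> RDMA driver (MPA reply; now in RDMA mode)\n");
	} else {
		c->in_use = false;	/* abort: 4-tuple reusable */
		printf("-> abort (unexpected non-MPA packet)\n");
	}
}

int main(void)
{
	struct tuple4 t = { 0x0a000001, 0x0a000002, 1234, 4321 };

	offload_tbl[0] = (struct offload_conn){ t, CONN_SETUP, true };

	rx_packet(&t, true);	/* MPA reply completes the setup */
	rx_packet(&t, false);	/* data on the established conn  */

	t.dport = 80;		/* different 4-tuple, no entry   */
	rx_packet(&t, false);
	return 0;
}

The property in question falls out of the lookup: until an offload
entry exists, everything flows to the native stack, and once one
exists, exactly one consumer (the RDMA driver or the host stack)
ever sees packets for that 4-tuple.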


