Message-ID: <Pine.WNT.4.64.0906152327070.7160@ppwaskie-MOBL2.amr.corp.intel.com>
Date:	Mon, 15 Jun 2009 23:42:16 -0700 (Pacific Daylight Time)
From:	"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>
To:	David Miller <davem@...emloft.net>
cc:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>
Subject: Re: flow director and packet ordering

On Mon, 15 Jun 2009, David Miller wrote:

> 
> Does the flow director implementation in IXGBE make sure to preserve
> packet ordering in some way?
> 
> You can't just change the RX flow destination queue whenever you like.
> 
> For example, if a transmit happens on cpu 1, and then we start to
> receive some packets for that flow on that cpu, will a sudden set of
> transmits on another cpu make the flow director quickly switch the RX
> destination?

Not by design, no.  The Flow Director mode currently in the driver is the 
hashing mode.  The 82599 also supports a perfect-match mode, which the 
driver doesn't support yet, since there's no good interface for 
programming the 5-tuple filters (I've been watching the ethtool work from 
the Sun guys for niu, but haven't had time to help).
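
For reference, a perfect-match filter would need to carry roughly the 
fields below.  This is only an illustrative sketch; the struct and field 
names are made up here and don't correspond to any existing ethtool or 
ixgbe interface.

#include <stdint.h>

/* Illustrative only: what a perfect-match 5-tuple filter would have
 * to describe.  Names are invented for this sketch. */
struct five_tuple_filter {
	uint32_t src_ip;	/* IPv4 source address */
	uint32_t dst_ip;	/* IPv4 destination address */
	uint16_t src_port;	/* L4 source port */
	uint16_t dst_port;	/* L4 destination port */
	uint8_t  l4_proto;	/* IPPROTO_TCP, IPPROTO_UDP, ... */
	uint16_t rx_queue;	/* queue to steer matching packets to */
};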

Back to the hashing mode.  The idea is that a particular flow, say on CPU 
1, will be hashed.  There are two layers of hashing; the first lands the 
flow in a bucket, the second lands it in the filter lists.  The two hash 
values for the Tx packet are then written to the filter table along with 
the Rx queue.  There are two ways the Rx queue for that flow could be 
moved: 1) The flow itself moves from CPU 1 to a different CPU.  This is 
the most likely case for moving the Rx flow.  2) We get a double collision 
on the hashes.  It isn't very likely that a different flow will collide on 
both hash values.

The reason we have two levels of hashing is to avoid exactly that 
situation: a collision accidentally moving a different flow.
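
To make the two-level scheme concrete, here's a rough software model of 
it.  Everything below (table sizes, the FNV/djb2 placeholder hashes, the 
fdir_* names) is invented for illustration; the real lookup is done by 
the 82599 hardware and differs in detail.

#include <stdint.h>

/* Software model of the two hash levels described above. */
struct flow_key {			/* the packet's 5-tuple */
	uint32_t saddr, daddr;
	uint16_t sport, dport;
	uint8_t  proto;
};

struct fdir_entry {
	uint16_t signature;		/* second-level hash value */
	uint16_t rx_queue;		/* queue the flow is steered to */
	uint8_t  valid;
};

#define FDIR_BUCKETS	4096
#define FDIR_DEPTH	8
static struct fdir_entry fdir_table[FDIR_BUCKETS][FDIR_DEPTH];

/* First-level hash: picks the bucket (FNV-1a, just a placeholder). */
static uint32_t hash_bucket(const struct flow_key *k)
{
	uint32_t h = 0x811c9dc5;
	uint32_t w[3] = { k->saddr, k->daddr,
			  ((uint32_t)k->sport << 16) | k->dport };
	for (int i = 0; i < 3; i++)
		h = (h ^ w[i]) * 0x01000193;
	h = (h ^ k->proto) * 0x01000193;
	return h % FDIR_BUCKETS;
}

/* Second-level hash: the signature stored in the filter entry. */
static uint16_t hash_signature(const struct flow_key *k)
{
	uint32_t h = 5381;
	h = h * 33 + k->saddr;
	h = h * 33 + k->daddr;
	h = h * 33 + (((uint32_t)k->dport << 16) | k->sport);
	h = h * 33 + k->proto;
	return (uint16_t)h;
}

/* On transmit: record (signature, Rx queue) under the flow's bucket. */
void fdir_add_on_tx(const struct flow_key *k, uint16_t rx_queue)
{
	struct fdir_entry *b = fdir_table[hash_bucket(k)];
	uint16_t sig = hash_signature(k);

	for (int i = 0; i < FDIR_DEPTH; i++) {
		if (!b[i].valid || b[i].signature == sig) {
			b[i].signature = sig;
			b[i].rx_queue  = rx_queue;
			b[i].valid     = 1;
			return;
		}
	}
	/* Bucket full: leave the flow to normal RSS steering. */
}

/* On receive: a different flow is misdirected only if it collides on
 * BOTH the bucket hash and the signature hash. */
int fdir_lookup_on_rx(const struct flow_key *k, uint16_t *rx_queue)
{
	struct fdir_entry *b = fdir_table[hash_bucket(k)];
	uint16_t sig = hash_signature(k);

	for (int i = 0; i < FDIR_DEPTH; i++) {
		if (b[i].valid && b[i].signature == sig) {
			*rx_queue = b[i].rx_queue;
			return 1;
		}
	}
	return 0;			/* no match: RSS decides */
}

The point of the model is just the last comment: with independent bucket 
and signature hashes, a stray flow only gets re-steered if both values 
collide.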

> 
> That would be very bad.

Agreed.

> 
> Without some extreme care that will result in potential packet
> reordering.
> 
> There needs to be some safety net in place in your silicon to wait for
> a significant "quiet time" in the RX stream before you can revector
> the RX flow destination setting.  Enough time to let the stack process
> all the packets for that flow received on the currently bound flow
> CPU.

The Rx DMAs that have already been posted must complete before our 
hardware will apply the changes to the Rx filters.
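
In other words, the ordering looks roughly like this (continuing the 
sketch above; the wait is enforced by the hardware, not by driver code, 
and the helper name is made up for illustration):

/* Conceptual ordering only. */
static void wait_for_posted_rx_dmas(void)
{
	/* stand-in: the hardware holds off the filter update until the
	 * Rx descriptors already posted for the flow have completed */
}

static void fdir_move_flow(const struct flow_key *k, uint16_t new_queue)
{
	/* 1. Packets already DMA'd complete to the old queue first... */
	wait_for_posted_rx_dmas();

	/* 2. ...then the updated filter entry takes effect, so later
	 *    packets for the flow land on new_queue. */
	fdir_add_on_tx(k, new_queue);
}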

> 
> To be honest, I'm not optimistic about this situation :-/

Given the way the internal hashing works in software and hardware, I think 
we're pretty safe from flows getting moved unexpectedly.

Cheers,
-PJ Waskiewicz
