Date:	Mon, 03 Jan 2011 11:52:00 -0800
From:	Alexander Duyck <alexander.h.duyck@...el.com>
To:	Ben Hutchings <bhutchings@...arflare.com>
CC:	David Miller <davem@...emloft.net>,
	"therbert@...gle.com" <therbert@...gle.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [RFC PATCH 0/3] Simplified 16 bit Toeplitz hash algorithm

On 1/3/2011 11:30 AM, Ben Hutchings wrote:
> On Mon, 2011-01-03 at 11:02 -0800, David Miller wrote:
>> From: Tom Herbert<therbert@...gle.com>
>> Date: Mon, 3 Jan 2011 10:47:20 -0800
>>
>>> I'm not sure why this would be needed.  What is the advantage in
>>> making the TX and RX queues match?
>>
>> That's how their hardware based RFS essentially works.
>>
>> Instead of watching for "I/O system calls" like we do in software, the
>> chip watches which TX queue a flow ends up on and matches things
>> up on the receive side with the same-numbered RX queue.
>
> ixgbe also implements IRQ affinity setting (or rather hinting) and TX
> queue selection by CPU, the inverse of IRQ affinity setting.  Together
> with the hardware/firmware Flow Director feature, this should indeed
> result in hardware RFS.  (However, irqbalanced does not yet follow the
> affinity hints AFAIK, so this requires some manual intervention.  Maybe
> the OOT driver is different?)
>
> The proposed change to make TX queue selection hash-based seems to be a
> step backwards.
>
> Ben.
>

Actually, this code would only be applied in cases where Flow Director 
doesn't apply, such as non-TCP frames.  It would essentially guarantee 
that TX and RX end up on the same CPU in all cases, instead of only 
when Flow Director matches a given flow.
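
For illustration only (this is not the actual RFC patch, and the names 
select_tx_queue, rx_hash and num_queues are made up here), the idea 
roughly reduces to deriving the TX queue index from the same hash the 
NIC already used to pick the RX queue:

#include <stdint.h>

/* Illustrative sketch, not the patch itself: if the NIC spread receive
 * traffic across 'num_queues' RX queues using a hash of the flow tuple,
 * choosing the TX queue from the same hash keeps a flow's TX and RX on
 * the same-numbered queue pair.  'rx_hash' stands in for whatever hash
 * value the hardware reported (e.g. the Toeplitz result). */
static uint16_t select_tx_queue(uint32_t rx_hash, uint16_t num_queues)
{
	/* Same reduction the RX side is assumed to use, so the
	 * TX and RX queue indices line up for a given flow. */
	return (uint16_t)(rx_hash % num_queues);
}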

The general idea is to at least keep the traffic local to one TX/RX 
queue pair, so that if we cannot match the queue pair to the 
application, perhaps the application can be affinitized to match the 
queue pair instead.  Otherwise TX traffic gets routed to one queue on 
one CPU while RX lands on another queue, possibly on a different CPU, 
and it becomes quite difficult to line up the queues with the 
applications.
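
As a userspace sketch of that affinitizing step (assuming the CPU 
servicing the queue pair's interrupt is already known, e.g. from 
/proc/interrupts or the driver's affinity hint; pin_to_queue_cpu is a 
hypothetical helper, not an existing API):

#define _GNU_SOURCE
#include <sched.h>

/* Hypothetical helper: pin the calling process to the CPU that services
 * the chosen TX/RX queue pair's interrupt.  The CPU number itself has to
 * come from elsewhere (IRQ affinity files, driver hints, etc.). */
static int pin_to_queue_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return sched_setaffinity(0, sizeof(set), &set);
}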

Since the approach is based on the Toeplitz hash, it can be applied to 
any hardware capable of generating a Toeplitz-based hash, and as a 
result it would likely work in a much more vendor-neutral way than 
Flow Director currently does.
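
For reference, the classic 32-bit RSS Toeplitz hash that such hardware 
implements is simple enough to express in a few lines (this is the 
standard form from the RSS spec, not the simplified 16-bit variant the 
RFC proposes; the function name and layout here are just for 
illustration):

#include <stdint.h>
#include <stddef.h>

/* Standard RSS Toeplitz hash over an input byte string (e.g. the
 * 12-byte IPv4 4-tuple).  'key' must be at least len + 4 bytes long;
 * the usual RSS secret key is 40 bytes. */
static uint32_t toeplitz_hash(const uint8_t *key, const uint8_t *in,
			      size_t len)
{
	uint32_t hash = 0;
	/* Initial 32-bit window over the key (key bits 0..31). */
	uint32_t window = ((uint32_t)key[0] << 24) |
			  ((uint32_t)key[1] << 16) |
			  ((uint32_t)key[2] << 8) |
			   (uint32_t)key[3];
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		for (bit = 7; bit >= 0; bit--) {
			/* For each set input bit, XOR in the current
			 * 32-bit key window. */
			if (in[i] & (1u << bit))
				hash ^= window;
			/* Slide the key window left by one bit,
			 * pulling in the next key bit. */
			window = (window << 1) |
				 ((key[i + 4] >> bit) & 1);
		}
	}
	return hash;
}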

Thanks,

Alex
