Message-ID: <65634d660906142254q4afb8f1ta63176817968c43d@mail.gmail.com>
Date:	Sun, 14 Jun 2009 22:54:07 -0700
From:	Tom Herbert <therbert@...gle.com>
To:	David Miller <davem@...emloft.net>
Cc:	netdev@...r.kernel.org
Subject: Re: [PATCH v2] Receive Packet Steering

On Wed, Jun 10, 2009 at 1:23 AM, David Miller <davem@...emloft.net> wrote:
> From: Tom Herbert <therbert@...gle.com>
> Date: Sun, 3 May 2009 21:03:01 -0700
>
>> This is an update of the receive packet steering (RPS) patch based on the
>> comments received (thanks for all the comments).  Improvements are:
>>
>> 1) Removed config option for the feature.
>> 2) Made scheduling of backlog NAPI devices between CPUs lockless and much
>> simpler.
>> 3) Added a new softirq to defer sending IPIs for coalescing.
>> 4) Imported the hash from simple_rx_hash.  Eliminates the modulo operation
>> used to convert the hash to an index.
>> 5) If no CPU is found for packet steering, then netif_receive_skb processes the
>> packet inline as before, without queueing.  In particular, if RPS is not
>> configured on a device, the receive path is unchanged from the current one for
>> NAPI devices (one additional conditional).
>>
>> Signed-off-by: Tom Herbert <therbert@...gle.com>
>
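
A side note on item 4 above: the modulo-free conversion is just a
multiply-and-shift mapping of the 32-bit hash onto the length of the
CPU map, roughly along these lines (a simplified illustration rather
than the exact patch code; the function and parameter names here are
made up):

	/*
	 * Map a 32-bit flow hash onto [0, map_len) without a divide:
	 * hash is (roughly) uniform over [0, 2^32), so the product
	 * scaled down by 32 bits is uniform over [0, map_len).
	 */
	static u16 rps_map_index(u32 hash, u16 map_len)
	{
		return (u16)(((u64)hash * map_len) >> 32);
	}
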
> Just to keep this topic alive, I want to mention two things:
>
> 1) Just the other day support for the IXGBE "Flow Director" was
>   added to net-next-2.6, it basically does flow steering in
>   hardware.  It remembers where the last TX for a flow was
>   made, and steers RX traffic there.
>
>   It's essentially a HW implementation of what we're proposing
>   here to do in software.
>

That's very cool.  Does it preserve in-order delivery?

> 2) I'm steadily still trying to get struct sk_buff to the point
>   where we can replace the list handling implementation with a
>   standard "struct list_head" and thus union that with a
>   "struct call_single_data" so we can use remote cpu soft-irqs
>   for software packet flow steering.
>
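
Just to make sure I'm reading that right, I take it the end state would
be roughly this shape (sketch only; I'm guessing at the placement and
omitting the real field layout):

	struct sk_buff {
		/* ... */
		union {
			struct list_head	list;	/* ordinary queue linkage */
			struct call_single_data	csd;	/* remote-CPU call for
							 * softirq steering */
		};
		/* ... */
	};

so the same space in the skb doubles as queue linkage and as the
argument block for the cross-CPU call.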

I took another look at that, and I have to wonder if it might be overly
complicated.  It seems like this use of the call_single_data structures
would essentially create another type of skbuff list, distinct from
sk_buff_head (but without qlen, which I think may still be important).
I'm not sure there's any less locking in that method either.  What is
the advantage over using a shared skbuff queue and sending a single IPI
to schedule the backlog device on the remote CPU?
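
Concretely, the shared-queue approach I'm thinking of is something like
the following (hand-wavy sketch, untested; the per-CPU variable and
function names are made up, and smp_send_reschedule() is only standing
in for whatever IPI would actually kick the remote softirq):

	/* Per-CPU input queue, protected by its own lock. */
	struct backlog_queue {
		spinlock_t		lock;
		struct sk_buff_head	queue;		/* keeps qlen for us */
		int			napi_scheduled;
	};

	static DEFINE_PER_CPU(struct backlog_queue, steer_backlog);

	/*
	 * Enqueue the skb on the target CPU's backlog and send at most
	 * one IPI to get that CPU's backlog NAPI device scheduled.
	 */
	static void steer_to_cpu(struct sk_buff *skb, int cpu)
	{
		struct backlog_queue *bq = &per_cpu(steer_backlog, cpu);
		int need_ipi = 0;

		spin_lock(&bq->lock);
		__skb_queue_tail(&bq->queue, skb);
		if (!bq->napi_scheduled) {
			bq->napi_scheduled = 1;
			need_ipi = 1;
		}
		spin_unlock(&bq->lock);

		if (need_ipi)
			smp_send_reschedule(cpu);
	}

The remote CPU's poll routine would splice bq->queue under the lock and
clear napi_scheduled once it has drained it, so the queue carries a
proper qlen and there is exactly one IPI per scheduling event.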
