Message-Id: <20090610.012342.121254416.davem@davemloft.net>
Date: Wed, 10 Jun 2009 01:23:42 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: therbert@...gle.com
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH v2] Receive Packet Steering
From: Tom Herbert <therbert@...gle.com>
Date: Sun, 3 May 2009 21:03:01 -0700
> This is an update of the receive packet steering (RPS) patch, based on the
> comments received (thanks for all the feedback). Improvements are:
>
> 1) Removed config option for the feature.
> 2) Made scheduling of backlog NAPI devices between CPUs lockless and much
> simpler.
> 3) Added a new softirq to defer sending IPIs so they can be coalesced.
> 4) Imported the hash from simple_rx_hash; this eliminates the modulo
>    operation used to convert the hash to an index.
> 5) If no CPU is found for packet steering, netif_receive_skb processes the
>    packet inline as before, without queueing. In particular, if RPS is not
>    configured on a device, the receive path for NAPI devices is unchanged
>    from the current one (one additional conditional).
>
> Signed-off-by: Tom Herbert <therbert@...gle.com>
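(As an aside on points 4 and 5 of the quoted description: below is a
minimal sketch of the steering decision they describe.  The structure
and names -- rps_map_like, steer_rx_cpu -- are illustrative assumptions
rather than the patch's actual code; the ((u64)hash * len) >> 32 scaling
is the usual way to map a 32-bit hash onto [0, len) without a modulo.

	#include <linux/types.h>

	/* Per-device list of CPUs that packets may be steered to. */
	struct rps_map_like {
		unsigned int	len;	/* number of configured CPUs */
		u16		cpus[];	/* CPU ids to steer to       */
	};

	/*
	 * Return the CPU that should handle a packet with this hash, or
	 * -1 if no map is configured, in which case the caller processes
	 * the packet inline, just as netif_receive_skb() always has.
	 */
	static int steer_rx_cpu(const struct rps_map_like *map, u32 hash)
	{
		if (!map || !map->len)
			return -1;
		return map->cpus[((u64)hash * map->len) >> 32];
	}

The fallback path is what keeps a device with no RPS configuration on
the existing receive path, at the cost of the single extra conditional
mentioned in point 5.)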
Just to keep this topic alive, I want to mention two things:
1) Just the other day support for the IXGBE "Flow Director" was
   added to net-next-2.6; it basically does flow steering in
   hardware.  It remembers where the last TX for a flow was
   made, and steers RX traffic there.
   It's essentially a HW implementation of what we're proposing
   here to do in software (a toy model of the idea is sketched
   below).
2) I'm still steadily working to get struct sk_buff to the point
   where we can replace the list handling implementation with a
   standard "struct list_head" and thus union that with a
   "struct call_single_data", so we can use remote-CPU softirqs
   for software packet flow steering (a minimal layout sketch of
   that union also follows below).
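On the first point, here is a toy model of the flow-director idea as
described above: remember which queue last transmitted for a flow, and
deliver that flow's RX traffic back to the same place.  The table size
and names are invented for illustration; this is not the ixgbe hardware
or driver interface.

	#include <linux/types.h>

	#define FDIR_TABLE_SIZE	4096	/* assumed table size (power of two) */

	/* Last TX queue seen for each flow-hash bucket. */
	static u16 fdir_last_tx_queue[FDIR_TABLE_SIZE];

	/* On transmit: remember which queue this flow used. */
	static void fdir_note_tx(u32 flow_hash, u16 tx_queue)
	{
		fdir_last_tx_queue[flow_hash & (FDIR_TABLE_SIZE - 1)] = tx_queue;
	}

	/* On receive: steer the flow back to the queue that last sent it. */
	static u16 fdir_rx_queue(u32 flow_hash)
	{
		return fdir_last_tx_queue[flow_hash & (FDIR_TABLE_SIZE - 1)];
	}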
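On the second point, a minimal layout sketch of the proposed union,
assuming the list pointers have already become a struct list_head.
"skb_like" stands in for struct sk_buff, which of course carries many
more fields, and the handler name below is made up; the point is only
to show how the two structures could share storage and what the
remote-CPU IPI would do.

	#include <linux/list.h>
	#include <linux/smp.h>
	#include <linux/interrupt.h>

	struct skb_like {
		union {
			struct list_head	list;	/* normal queue linkage       */
			struct call_single_data	csd;	/* remote-CPU softirq trigger */
		};
		/* ... the rest of the sk_buff fields ... */
	};

	/*
	 * Runs on the remote CPU when the IPI arrives: raise its receive
	 * softirq so the packets queued to that CPU's backlog get
	 * processed there.
	 */
	static void remote_rx_trigger(void *info)
	{
		raise_softirq(NET_RX_SOFTIRQ);
	}

The appeal of the union is that the list linkage and the IPI descriptor
would presumably never be needed at the same time, so they can share
storage in the skb.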