Message-ID: <1268429319.2947.10.camel@edumazet-laptop>
Date: Fri, 12 Mar 2010 22:28:39 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Tom Herbert <therbert@...gle.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: [PATCH v7] rps: Receive Packet Steering
On Friday, 12 March 2010 at 12:13 -0800, Tom Herbert wrote:
> This patch implements software receive-side packet steering (RPS). RPS
> distributes the load of received packet processing across multiple CPUs.
>
> Problem statement: Protocol processing done in the NAPI context for received
> packets is serialized per device queue and becomes a bottleneck under high
> packet load. This substantially limits the pps that can be achieved on a
> single-queue NIC and provides no scaling across multiple cores.
>
> This solution queues packets early in the receive path onto the backlog
> queues of other CPUs. This allows protocol processing (e.g. IP and TCP) to
> be performed on packets in parallel. For each device (or each receive queue
> in a multi-queue device), a mask of CPUs is set to indicate the CPUs that
> can process packets. A CPU is selected on a per-packet basis by hashing
> contents of the packet header (e.g. the TCP or UDP 4-tuple) and using the
> result to index into the CPU mask. The IPI mechanism is used to raise
> networking receive softirqs between CPUs. This effectively emulates in
> software what a multi-queue NIC can provide, but is generic, requiring no
> device support.
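> 
> As a minimal sketch of the selection logic (the struct and function here
> are illustrative, not necessarily the exact names used in the patch):
> 
> 	struct rps_map {
> 		unsigned int len;	/* number of CPUs in the map */
> 		u16 cpus[0];		/* CPUs eligible for this queue */
> 	};
> 
> 	/* Pick a target CPU from the packet's 32-bit flow hash. */
> 	static int example_rps_cpu(const struct rps_map *map, u32 hash)
> 	{
> 		if (!map || !map->len)
> 			return -1;	/* no steering configured */
> 		/* Scale the hash into [0, len) without a modulo. */
> 		return map->cpus[((u64)hash * map->len) >> 32];
> 	}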
>
> Many devices now provide a hash over the 4-tuple on a per-packet basis
> (e.g. the Toeplitz hash). This patch allows drivers to set the HW-reported
> hash in an skb field, and that value in turn is used to index into the RPS
> maps. Using the HW-generated hash can avoid cache misses on the packet when
> steering it to a remote CPU.
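> 
> A sketch of how a driver might pass the NIC's hash up to the stack,
> assuming the skb field is named rxhash (the descriptor field name here
> is made up for illustration):
> 
> 	/* In the driver's RX completion path, before netif_receive_skb(): */
> 	skb->rxhash = le32_to_cpu(rx_desc->rss_hash);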
>
> The CPU mask is set on a per-device and per-queue basis in the sysfs variable
> /sys/class/net/<device>/queues/rx-<n>/rps_cpus. This is a set of canonical
> bitmaps, one per receive queue in the device (numbered by <n>). If a device
> does not support multi-queue, a single variable is used for the device (rx-0).
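> 
> For example, to steer packets from the first receive queue to CPUs 0-3
> (eth0 is just a placeholder device name):
> 
> 	echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus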
>
> Generally, we have found this technique increases the pps capability of a
> single-queue device with good CPU utilization. Optimal settings for the CPU
> mask seem to depend on architecture and cache hierarchy. Below are some
> results from running 500 instances of the netperf TCP_RR test with 1-byte
> requests and responses. Results show cumulative transaction rate and system
> CPU utilization.
>
> e1000e on 8-core Intel
> Without RPS: 108K tps at 33% CPU
> With RPS: 311K tps at 64% CPU
>
> forcedeth on 16-core AMD
> Without RPS: 156K tps at 15% CPU
> With RPS: 404K tps at 49% CPU
>
> bnx2x on 16-core AMD
> Without RPS: 567K tps at 61% CPU (4 HW RX queues)
> Without RPS: 738K tps at 96% CPU (8 HW RX queues)
> With RPS: 854K tps at 76% CPU (4 HW RX queues)
>
> Caveats:
> - The benefits of this patch are dependent on architecture and cache
> hierarchy. Tuning the masks to get the best performance is probably
> necessary.
> - This patch adds overhead in the path for processing a single packet. On
> a lightly loaded server this overhead may eliminate the advantages of
> increased parallelism, and possibly cause some relative performance
> degradation. We have found that cache-aware masks (CPUs that share caches
> with the interrupting CPU) mitigate much of this.
> - The RPS masks can be changed dynamically; however, every mask change
> introduces the possibility of generating out-of-order packets. It's
> probably best not to change the masks too frequently.
>
> Signed-off-by: Tom Herbert <therbert@...gle.com>
>
> include/linux/netdevice.h | 32 ++++-
> include/linux/skbuff.h | 3 +
> net/core/dev.c | 330 +++++++++++++++++++++++++++++++++++++-------
> net/core/net-sysfs.c | 225 ++++++++++++++++++++++++++++++-
> net/core/skbuff.c | 2 +
> 5 files changed, 536 insertions(+), 56 deletions(-)
>
Excellent!

Signed-off-by: Eric Dumazet <eric.dumazet@...il.com>

One last point about the placement of rxhash in struct sk_buff, which I
missed in my previous review, sorry...

You put it right before cb[48], which is now aligned to 8 bytes (since
commit da3f5cf1, "skbuff: align sk_buff::cb to 64 bit and close some
potential holes"), so this adds a 4-byte hole.

Please put it elsewhere, possibly close to fields that are read in
get_rps_cpu() (skb->queue_mapping, skb->protocol, skb->data, ...), to
minimize the number of cache lines the dispatching CPU has to bring into
its cache before handing the skb to another CPU for IP/TCP processing.
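
An illustrative fragment (not the actual sk_buff layout) of why the
current placement costs 4 bytes on a 64-bit build:

	struct example {
		__u32	rxhash;		/* 4 bytes */
		/* 4-byte hole here: cb[] below is 8-byte aligned */
		char	cb[48] __aligned(8);
	};
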
Thanks!