Date:	Tue, 06 Apr 2010 15:04:56 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Tom Herbert <therbert@...gle.com>
Cc:	davem@...emloft.com, netdev@...r.kernel.org
Subject: Re: [PATCH v2] rfs: Receive Flow Steering

On Monday, 05 April 2010 at 22:56 -0700, Tom Herbert wrote:
> Version 2:
> - added a u16 filler to pad rps_dev_flow structure
> - define RPS_NO_CPU as 0xffff
> - add inet_rps_save_rxhash helper function to copy skb's rxhash into inet_sk
> - added a "voidflow" which can be used when get_rps_cpu does not return a flow (avoids some conditionals)
> - use raw_smp_processor_id in rps_record_sock_flow; there is no requirement to
> prevent preemption there
> ---
> This patch implements software receive side packet steering (RPS).  RPS
> distributes the load of received packet processing across multiple CPUs.
> 
> Problem statement: Protocol processing done in the NAPI context for received
> packets is serialized per device queue and becomes a bottleneck under high
> packet load.  This substantially limits pps that can be achieved on a single
> queue NIC and provides no scaling with multiple cores.
> 
> This solution queues packets early on in the receive path on the backlog queues
> of other CPUs.   This allows protocol processing (e.g. IP and TCP) to be
> performed on packets in parallel.   For each device (or each receive queue in
> a multi-queue device) a mask of CPUs is set to indicate the CPUs that can
> process packets. A CPU is selected on a per packet basis by hashing contents
> of the packet header (e.g. the TCP or UDP 4-tuple) and using the result to index
> into the CPU mask.  The IPI mechanism is used to raise networking receive
> softirqs between CPUs.  This effectively emulates in software what a multi-queue
> NIC can provide, but is generic requiring no device support.
> 
> Many devices now provide a hash over the 4-tuple on a per packet basis
> (e.g. the Toeplitz hash).  This patch allows drivers to set the HW reported hash
> in an skb field, and that value in turn is used to index into the RPS maps.
> Using the HW generated hash can avoid cache misses on the packet when
> steering it to a remote CPU.
> 
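As a rough, purely illustrative sketch of the selection step described above
(toy userspace code; the hash function and all names below are made up, the
kernel uses its own flow hash or the HW-provided rxhash):

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the packet-header hash (jhash/Toeplitz in practice). */
static uint32_t toy_flow_hash(uint32_t saddr, uint32_t daddr,
			      uint16_t sport, uint16_t dport)
{
	uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);

	h ^= h >> 16;
	h *= 0x9e3779b1u;
	return h;
}

/* One simple way to map a 32-bit hash onto the list of allowed CPUs. */
static int pick_cpu(uint32_t hash, const int *cpus, int ncpus)
{
	return cpus[((uint64_t)hash * ncpus) >> 32];
}

int main(void)
{
	int allowed[] = { 0, 1, 2, 3 };	/* e.g. a mask covering CPUs 0-3 */
	uint32_t h = toy_flow_hash(0x0a000001, 0x0a000002, 12345, 80);

	printf("flow steered to CPU %d\n", pick_cpu(h, allowed, 4));
	return 0;
}
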
> The CPU mask is set on a per device and per queue basis in the sysfs variable
> /sys/class/net/<device>/queues/rx-<n>/rps_cpus.  This is a set of canonical
> bitmaps for receive queues in the device (numbered by <n>).  If a device
> does not support multi-queue, a single variable is used for the device (rx-0).
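
For example (assuming a device named eth0), writing "f" to
/sys/class/net/eth0/queues/rx-0/rps_cpus should let CPUs 0-3 process packets
received on that device's first queue.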
> 
> Generally, we have found this technique increases pps capabilities of a single
> queue device with good CPU utilization.  Optimal settings for the CPU mask
> seem to depend on architecture and cache hierarchy.  Below are some results
> running 500 instances of netperf TCP_RR test with 1 byte req. and resp.
> Results show cumulative transaction rate and system CPU utilization.
> 
> e1000e on 8 core Intel
>    Without RPS: 108K tps at 33% CPU
>    With RPS:    311K tps at 64% CPU
> 
> forcedeth on 16 core AMD
>    Without RPS: 156K tps at 15% CPU
>    With RPS:    404K tps at 49% CPU
>    
> bnx2x on 16 core AMD
>    Without RPS: 567K tps at 61% CPU (4 HW RX queues)
>    Without RPS: 738K tps at 96% CPU (8 HW RX queues)
>    With RPS:    854K tps at 76% CPU (4 HW RX queues)
> 
> Caveats:
> - The benefits of this patch are dependent on architecture and cache hierarchy.
> Tuning the masks to get best performance is probably necessary.
> - This patch adds overhead in the path for processing a single packet.  In
> a lightly loaded server this overhead may eliminate the advantages of
> increased parallelism, and possibly cause some relative performance degradation.
> We have found that masks that are cache aware (share same caches with
> the interrupting CPU) mitigate much of this.
> - The RPS masks can be changed dynamically, however whenever the mask is changed
> this introduces the possibility of generating out-of-order packets.  It's
> probably best not to change the masks too frequently.
> 
> Signed-off-by: Tom Herbert <therbert@...gle.com>
> ---

Running on a preprod machine here, seems fine.

Some questions:

1) The need to add "rps_flow_entries=xxx" at boot time is problematic.
   Maybe we can allow it to be dynamic (and use vmalloc() instead of
   alloc_large_system_hash()).
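
Something along these lines, as a purely hypothetical sketch (structure and
function names are invented here, not taken from the patch): vmalloc() a
replacement table, publish it with rcu_assign_pointer(), and free the old one
after a grace period:

struct flow_table {
	unsigned int mask;	/* number of entries - 1 (power of two) */
	u16 cpus[];		/* last CPU recorded for each bucket */
};

static struct flow_table *global_flow_table;

/* Caller is assumed to serialize updates (e.g. a sysctl handler). */
static int resize_flow_table(unsigned int entries)
{
	struct flow_table *new_tab, *old_tab;

	new_tab = vmalloc(sizeof(*new_tab) + entries * sizeof(u16));
	if (!new_tab)
		return -ENOMEM;
	new_tab->mask = entries - 1;
	/* start with every entry set to 0xffff (RPS_NO_CPU) */
	memset(new_tab->cpus, 0xff, entries * sizeof(u16));

	old_tab = global_flow_table;
	rcu_assign_pointer(global_flow_table, new_tab);
	synchronize_rcu();	/* wait for readers of the old table */
	vfree(old_tab);		/* vfree(NULL) is a no-op */
	return 0;
}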

2) inet_rps_save_rxhash(sk, skb->rxhash);

	It should have a check to make sure some part of the stack doesn't feed
many different rxhash values for a given socket (make sure we don't pollute
the flow table with pseudo-random values).
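
One possible (and purely hypothetical) form of such a check, assuming the
rxhash field the patch adds to inet_sock: once a hash has been recorded for
the socket, refuse to replace it with a different value.

/* Sketch only: keep the first rxhash recorded for a socket, so a code path
 * feeding pseudo-random hashes cannot keep rewriting entries in the global
 * flow table. */
static inline void inet_rps_save_rxhash_checked(struct sock *sk, u32 rxhash)
{
	struct inet_sock *inet = inet_sk(sk);

	if (inet->rxhash && inet->rxhash != rxhash)
		return;
	inet->rxhash = rxhash;
}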

3) Connected UDP sockets don't currently benefit from RFS.
   (Not sure many apps use connected UDP sockets; I do have some of them
   in house.)

I am trying the following code, for IPv4 only:

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 7af756d..5c2d37a 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1216,6 +1216,7 @@ int udp_disconnect(struct sock *sk, int flags)
 	sk->sk_state = TCP_CLOSE;
 	inet->inet_daddr = 0;
 	inet->inet_dport = 0;
+	inet_rps_save_rxhash(sk, 0);
 	sk->sk_bound_dev_if = 0;
 	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
 		inet_reset_saddr(sk);
@@ -1257,8 +1258,12 @@ EXPORT_SYMBOL(udp_lib_unhash);
 
 static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 {
-	int rc = sock_queue_rcv_skb(sk, skb);
+	int rc;
+
+	if (inet_sk(sk)->inet_daddr)
+		inet_rps_save_rxhash(sk, skb->rxhash);
 
+	rc = sock_queue_rcv_skb(sk, skb);
 	if (rc < 0) {
 		int is_udplite = IS_UDPLITE(sk);
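
(So only connected sockets, i.e. those with inet_daddr set, get their rxhash
recorded, and udp_disconnect() clears the saved hash again.)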
 


