Message-ID: <1361984703.11403.43.camel@edumazet-glaptop>
Date:	Wed, 27 Feb 2013 09:05:03 -0800
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	netdev <netdev@...r.kernel.org>,
	Neal Cardwell <ncardwell@...gle.com>,
	Tom Herbert <therbert@...gle.com>,
	Yuchung Cheng <ycheng@...gle.com>,
	Andi Kleen <ak@...ux.intel.com>
Subject: [PATCH] tcp: avoid wakeups for pure ACK

From: Eric Dumazet <edumazet@...gle.com>

The purpose of the TCP prequeue mechanism is to let incoming packets
be processed by the thread currently blocked in tcp_recvmsg(),
instead of by the softirq handler, so that flow control better adapts
to the receiver host's capacity to schedule the consumer.

But in typical request/answer workloads, we send a request, then
block to receive the answer. Before the actual answer arrives, the
TCP stack receives the ACK packets acknowledging the request.

Processing a pure ACK on behalf of the thread blocked in tcp_recvmsg()
is a waste of resources, as the thread has to immediately sleep again
because it received no payload.

This patch avoids the extra context switches and scheduler overhead.

Before patch:

a:~# echo 0 >/proc/sys/net/ipv4/tcp_low_latency
a:~# perf stat ./super_netperf 300 -t TCP_RR -l 10 -H 7.7.7.84 -- -r 8k,8k
231676

 Performance counter stats for './super_netperf 300 -t TCP_RR -l 10 -H 7.7.7.84 -- -r 8k,8k':

     116251.501765 task-clock                #   11.369 CPUs utilized          
         5,025,463 context-switches          #    0.043 M/sec                  
         1,074,511 CPU-migrations            #    0.009 M/sec                  
           216,923 page-faults               #    0.002 M/sec                  
   311,636,972,396 cycles                    #    2.681 GHz                    
   260,507,138,069 stalled-cycles-frontend   #   83.59% frontend cycles idle   
   155,590,092,840 stalled-cycles-backend    #   49.93% backend  cycles idle   
   100,101,255,411 instructions              #    0.32  insns per cycle        
                                             #    2.60  stalled cycles per insn
    16,535,930,999 branches                  #  142.243 M/sec                  
       646,483,591 branch-misses             #    3.91% of all branches        

      10.225482774 seconds time elapsed

After patch:

a:~# echo 0 >/proc/sys/net/ipv4/tcp_low_latency
a:~# perf stat ./super_netperf 300 -t TCP_RR -l 10 -H 7.7.7.84 -- -r 8k,8k
233297

 Performance counter stats for './super_netperf 300 -t TCP_RR -l 10 -H 7.7.7.84 -- -r 8k,8k':

      91084.870855 task-clock                #    8.887 CPUs utilized          
         2,485,916 context-switches          #    0.027 M/sec                  
           815,520 CPU-migrations            #    0.009 M/sec                  
           216,932 page-faults               #    0.002 M/sec                  
   245,195,022,629 cycles                    #    2.692 GHz                    
   202,635,777,041 stalled-cycles-frontend   #   82.64% frontend cycles idle   
   124,280,372,407 stalled-cycles-backend    #   50.69% backend  cycles idle   
    83,457,289,618 instructions              #    0.34  insns per cycle        
                                             #    2.43  stalled cycles per insn
    13,431,472,361 branches                  #  147.461 M/sec                  
       504,470,665 branch-misses             #    3.76% of all branches        

      10.249594448 seconds time elapsed

Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Cc: Neal Cardwell <ncardwell@...gle.com>
Cc: Tom Herbert <therbert@...gle.com>
Cc: Yuchung Cheng <ycheng@...gle.com>
Cc: Andi Kleen <ak@...ux.intel.com>
---
David : Feel free to postpone this to 3.10.
I'll send a patch to move tcp_prequeue() out of line when net-next opens.

 include/net/tcp.h |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 23f2e98..cf0694d 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1045,6 +1045,10 @@ static inline bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
 	if (sysctl_tcp_low_latency || !tp->ucopy.task)
 		return false;
 
+	if (skb->len <= tcp_hdrlen(skb) &&
+	    skb_queue_len(&tp->ucopy.prequeue) == 0)
+		return false;
+
 	__skb_queue_tail(&tp->ucopy.prequeue, skb);
 	tp->ucopy.memory += skb->truesize;
 	if (tp->ucopy.memory > sk->sk_rcvbuf) {

