Message-ID: <5227FBA9.8030604@redhat.com>
Date:	Thu, 05 Sep 2013 11:34:01 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	Yuchung Cheng <ycheng@...gle.com>,
	Neal Cardwell <ncardwell@...gle.com>,
	"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH v2 net-next] pkt_sched: fq: Fair Queue packet scheduler

On 09/04/2013 07:27 PM, Eric Dumazet wrote:
> On Wed, 2013-09-04 at 03:30 -0700, Eric Dumazet wrote:
>> On Wed, 2013-09-04 at 14:30 +0800, Jason Wang wrote:
>>
>>>> And tcpdump would certainly help ;)
>>>
>>> See attachment.
>>>
>>
>> Nothing obvious in the tcpdump (only that a lot of frames are missing)
>>
>> 1) Are you capturing only part of the payload (like tcpdump -s 128)?
>>
>> 2) What is the setup?
>>
>> 3) tc -s -d qdisc
> If you use FQ in the guest, then it could be that high-resolution timers
> have high latency?

Not sure, but it should not matter that much. And I'm using kvm-clock in
the guest, whose overhead should be very small.
>
> So FQ arms short timers, but the effective duration could be much longer.
>
> Here I get a smooth latency of up to ~3 us
>
> lpq83:~# ./netperf -H lpq84 ; ./tc -s -d qd ; dmesg | tail -n1
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to lpq84.prod.google.com () port 0 AF_INET
> Recv   Send    Send                          
> Socket Socket  Message  Elapsed              
> Size   Size    Size     Time     Throughput  
> bytes  bytes   bytes    secs.    10^6bits/sec  
>
>  87380  16384  16384    10.00    9410.82   
> qdisc fq 8005: dev eth0 root refcnt 32 limit 10000p flow_limit 100p buckets 1024 quantum 3028 initial_quantum 15140 
>  Sent 50545633991 bytes 33385894 pkt (dropped 0, overlimits 0 requeues 19) 
>  rate 9258Mbit 764335pps backlog 0b 0p requeues 19 
>   117 flow, 115 inactive, 0 throttled
>   0 gc, 0 highprio, 0 retrans, 96861 throttled, 0 flows_plimit
> [  572.551664] latency = 3035 ns
>
>
> What do you get with this debugging patch ?

I'm getting about 13us-19us; one run looks like:

 netperf -H 192.168.100.5; tc -s -d qd; dmesg | tail -n1
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.100.5 () port 0 AF_INET : demo
Recv   Send    Send                         
Socket Socket  Message  Elapsed             
Size   Size    Size     Time     Throughput 
bytes  bytes   bytes    secs.    10^6bits/sec 

 87380  16384  16384    10.00    4542.09  
qdisc fq 8001: dev eth0 root refcnt 2 [Unknown qdisc, optlen=64]
 Sent 53652327205 bytes 35580150 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
[  201.320565] latency = 14905 ns
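
For reference, here is a rough, hypothetical sketch of how such an
hrtimer-expiration-latency measurement could be instrumented inside the
guest as a standalone module. It is illustrative only, not the actual
debugging patch from this thread, and all names in it are made up:

/* hrtimer_lat.c: hypothetical sketch, not the debugging patch used above.
 * Arm a short hrtimer and report how late it actually fires, to get a
 * rough feel for hrtimer expiration latency inside the guest.
 */
#include <linux/module.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer test_timer;
static ktime_t expected;

static enum hrtimer_restart test_timer_fn(struct hrtimer *t)
{
	/* latency = time the callback actually ran minus requested expiry */
	pr_info("hrtimer latency = %lld ns\n",
		ktime_to_ns(ktime_sub(ktime_get(), expected)));
	return HRTIMER_NORESTART;
}

static int __init hrtimer_lat_init(void)
{
	ktime_t delay = ktime_set(0, 50 * 1000);	/* 50 us, same order as fq's short pacing timers */

	hrtimer_init(&test_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	test_timer.function = test_timer_fn;
	expected = ktime_add(ktime_get(), delay);
	hrtimer_start(&test_timer, delay, HRTIMER_MODE_REL);
	return 0;
}

static void __exit hrtimer_lat_exit(void)
{
	hrtimer_cancel(&test_timer);
}

module_init(hrtimer_lat_init);
module_exit(hrtimer_lat_exit);
MODULE_LICENSE("GPL");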

One interesting thing is that if I switch from kvm-clock to acpi_pm, which
has much more overhead, the latency increases to about 50us and the
throughput drops very quickly.
netperf -H 192.168.100.5; tc -s -d qd; dmesg | tail -n1
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.100.5 () port 0 AF_INET : demo
Recv   Send    Send                         
Socket Socket  Message  Elapsed             
Size   Size    Size     Time     Throughput 
bytes  bytes   bytes    secs.    10^6bits/sec 

 87380  16384  16384    10.00    2262.46  
qdisc fq 8001: dev eth0 root refcnt 2 [Unknown qdisc, optlen=64]
 Sent 56611533075 bytes 37550429 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
[  474.121689] latency = 51841 ns
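
As a rough way to compare the per-read overhead of the two clocksources
directly, a small, hypothetical user-space micro-benchmark (not from this
thread) that times back-to-back clock_gettime() calls can be run once with
kvm-clock and once with acpi_pm selected via
/sys/devices/system/clocksource/clocksource0/current_clocksource:

/* clk_cost.c: hypothetical micro-benchmark.
 * Build with: gcc -O2 clk_cost.c -o clk_cost  (add -lrt on older glibc)
 */
#include <stdio.h>
#include <time.h>

int main(void)
{
	const long iters = 1000000;
	struct timespec start, now, end;
	long long ns;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iters; i++)		/* hammer the active clocksource */
		clock_gettime(CLOCK_MONOTONIC, &now);
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
	     (end.tv_nsec - start.tv_nsec);
	printf("avg clock_gettime() cost: %lld ns\n", ns / iters);
	return 0;
}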
