Date:	Wed, 04 Sep 2013 14:30:52 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	Yuchung Cheng <ycheng@...gle.com>,
	Neal Cardwell <ncardwell@...gle.com>,
	"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH v2 net-next] pkt_sched: fq: Fair Queue packet scheduler

On 09/04/2013 01:59 PM, Eric Dumazet wrote:
> On Wed, 2013-09-04 at 13:26 +0800, Jason Wang wrote:
>
>> I see both degradation and jitter when using fq with virtio-net. Guest
>> to guest performance drops from 8Gb/s to 3Gb/s-7Gb/s. Guest to local
>> host drops from 8Gb/s to 4Gb/s-6Gb/s. Guest to external host with ixgbe
>> drops from 9Gb/s to 7Gb/s.
>>
>> I didn't see the issue when using sfq or when pacing was disabled.
>>
>> So it looks like it was caused by the inaccuracy and jitter of the
>> pacing estimation in a virt guest?
> Well, using virtio-net means you use FQ without pacing.
>
> Make sure you do not have reorders because of a bug in queue selection.

I tested with only one queue enabled, so it should not have this problem.
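
(For reference, a rough sketch of one way to limit virtio-net to a single
queue and swap the qdisc between runs; the interface name eth0 is a
placeholder, not necessarily what was used here:

  ethtool -L eth0 combined 1          # restrict virtio-net to one queue pair
  tc qdisc replace dev eth0 root sfq  # baseline run
  tc qdisc replace dev eth0 root fq   # fq run
)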
>
> TCP stack has the ooo_okay thing, I do not think a VM can get it.
>
> nstat >/dev/null ; <your test> ; nstat
This is the result of guest to guest:

using sfq:

nstat > /dev/null; netperf -H 192.168.100.5; nstat
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.100.5 () port 0 AF_INET : demo
Recv   Send    Send                         
Socket Socket  Message  Elapsed             
Size   Size    Size     Time     Throughput 
bytes  bytes   bytes    secs.    10^6bits/sec 

 87380  16384  16384    10.01    9155.88  
#kernel
IpInReceives                    130989             0.0
IpInDelivers                    130989             0.0
IpOutRequests                   176518             0.0
TcpActiveOpens                  2                  0.0
TcpInSegs                       130989             0.0
TcpOutSegs                      7908396            0.0
TcpExtDelayedACKs               1                  0.0
TcpExtTCPPureAcks               60997              0.0
TcpExtTCPHPAcks                 69985              0.0
IpExtInOctets                   6813412            0.0
IpExtOutOctets                  11460499000        0.0
IpExtInNoECTPkts                130989             0.0

using fq:
nstat > /dev/null; netperf -H 192.168.100.5; nstat
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.100.5 () port 0 AF_INET : demo
Recv   Send    Send                         
Socket Socket  Message  Elapsed             
Size   Size    Size     Time     Throughput 
bytes  bytes   bytes    secs.    10^6bits/sec 

 87380  16384  16384    10.00    6340.29  
#kernel
IpInReceives                    121595             0.0
IpInDelivers                    121595             0.0
IpOutRequests                   121763             0.0
TcpActiveOpens                  2                  0.0
TcpInSegs                       121595             0.0
TcpOutSegs                      5474944            0.0
TcpExtTW                        2                  0.0
TcpExtDelayedACKs               1                  0.0
TcpExtTCPPureAcks               50946              0.0
TcpExtTCPHPAcks                 70642              0.0
IpExtInOctets                   6324924            0.0
IpExtOutOctets                  7934016612         0.0
IpExtInNoECTPkts                121595             0.0
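
(Cross-checking the octet counters against netperf: for sfq,
11460499000 bytes * 8 / 10.01 s ~= 9.16 Gbit/s, matching the reported
9155.88 Mbit/s; for fq, 7934016612 bytes * 8 / 10.00 s ~= 6.35 Gbit/s,
matching the reported 6340.29 Mbit/s.)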

>
> And tcpdump would certainly help ;)

See attachment.
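
(A capture along these lines is enough to get a similar trace; the
interface name is a placeholder, not necessarily what was used:

  tcpdump -i eth0 -s 96 -w fq_out host 192.168.100.5
)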

Thanks

View attachment "fq_out" of type "text/plain" (47989 bytes)
