Message-ID: <CAA93jw6g1se3pfC8qzvDKnp9N8x0wRhdBActr0Og8rHaHvdAMQ@mail.gmail.com>
Date: Wed, 4 Jan 2012 08:56:05 +0100
From: Dave Taht <dave.taht@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Michal Kubeček <mkubecek@...e.cz>,
netdev@...r.kernel.org,
"John A. Sullivan III" <jsullivan@...nsourcedevel.com>
Subject: Re: [RFC] SFQ planned changes
On Wed, Jan 4, 2012 at 1:14 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> Le mercredi 04 janvier 2012 à 00:57 +0100, Dave Taht a écrit :
>> On Tue, Jan 3, 2012 at 5:08 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> > Here is the code I ran on my test server with 200 netperf TCP_STREAM
>> > flows with pretty good results (each flow gets 0.5 % of bandwidth)
>>
>> Can I encourage you to always simultaneously run a fping and/or a
>> netperf -t TCP_RR
>>
So I sat down and set up something that could do gigE and exercise
everything I had lying around worth playing with, to see what crashed...
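For the record, the latency-under-load probe is roughly like this (a
sketch, not my actual script; the host address and intervals here are
made up):

  # latency probe started before, and running during, the bulk load
  ping -i 0.2 172.30.47.27 > ping.log &
  # one transactional flow to watch RTT under load
  netperf -H 172.30.47.27 -t TCP_RR -l 600 &
  # the bulk load itself: 100 parallel streams
  for i in $(seq 100); do netperf -H 172.30.47.27 -t TCP_STREAM -l 600 & done
  wait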
> a ping on idle link :
> # ping -c 20 192.168.20.112
> PING 192.168.20.112 (192.168.20.112) 56(84) bytes of data.
> 64 bytes from 192.168.20.112: icmp_req=1 ttl=64 time=0.119 ms
> 64 bytes from 192.168.20.112: icmp_req=2 ttl=64 time=0.090 ms
> 64 bytes from 192.168.20.112: icmp_req=3 ttl=64 time=0.085 ms
> 64 bytes from 192.168.20.112: icmp_req=4 ttl=64 time=0.087 ms
I find it puzzling that my baseline ping time is nearly 3x yours.
I guess this is the price I pay for a 680MHz box on the other end.
My baseline ping (1 hop e1000e to router)
64 bytes from 172.30.50.1: icmp_req=18 ttl=64 time=0.239 ms
64 bytes from 172.30.50.1: icmp_req=19 ttl=64 time=0.247 ms
64 bytes from 172.30.50.1: icmp_req=20 ttl=64 time=0.301 ms
(or, in my data format; RTT in ms per ping sequence)
|count|172.30.50.1|172.30.47.1|172.30.47.27|
|-----|-----------|-----------|------------|
|1|0.34|0.63|0.59|
|2|0.28|0.42|0.45|
|3|0.39|0.41|0.48|
|4|0.37|0.42|0.51|
|5|0.33|0.43|0.49|
Your load test:
> # ping -c 20 192.168.20.112
> PING 192.168.20.112 (192.168.20.112) 56(84) bytes of data.
> 64 bytes from 192.168.20.112: icmp_req=1 ttl=64 time=0.251 ms
> 64 bytes from 192.168.20.112: icmp_req=2 ttl=64 time=0.123 ms
> 64 bytes from 192.168.20.112: icmp_req=3 ttl=64 time=0.124 ms
This was my complex QFQ/SFQ test that ran all night (somehow), at gigE.
STAQFQ is on the source laptop, with 100 iperfs in play, 600 seconds
at a time, and a net transfer rate of about 250 Mbit - and I rate
limited BQL to a limit_max of 9000. GSO/TSO are off throughout.
(STAQFQ is 514 QFQ bins, 24 pfifo_fast qdiscs per.)
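For anyone wanting to reproduce the BQL and offload settings, the gist
is this (assuming eth0 and a single tx queue; your paths may differ):

  # cap BQL's upper byte limit on the tx queue
  echo 9000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max
  # turn off segmentation offloads so the qdisc sees real packet sizes
  ethtool -K eth0 tso off gso off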
The first router has STAQFQ on the external interface connected to laptop #1
and SFQ on the internal interface connected to router #2.
Router #2 has SFQ on both its external and internal interfaces.
Laptop #2 has pfifo_fast on it.
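A rough sketch of that layout follows - NOT my actual STAQFQ script
(far fewer bins here, and the flow filter details are from memory):

  # QFQ root with equal-weight classes, a pfifo_fast leaf per class
  tc qdisc add dev eth0 root handle 1: qfq
  for i in $(seq 1 8); do
      tc class add dev eth0 parent 1: classid 1:$i qfq weight 1
      tc qdisc add dev eth0 parent 1:$i pfifo_fast
  done
  # hash flows across the classes with the flow classifier
  tc filter add dev eth0 parent 1: protocol all prio 1 \
      flow hash keys src,dst,proto,proto-src,proto-dst \
      divisor 8 baseclass 1:1
  # plain SFQ on the router-facing interface
  tc qdisc add dev eth1 root sfq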
(RTT in ms)
|count|e1000e to router|to next router|through next router's switch to laptop #2|
|100|0.40|0.42|0.57|
|101|0.49|0.48|0.54|
|102|0.59|0.65|0.73|
|103|0.48|0.59|0.83|
|104|0.36|0.56|0.75|
|105|0.51|0.63|0.66|
|106|0.41|0.60|0.40|
|107|0.62|0.44|0.81|
|108|0.33|0.36|0.79|
|109|0.49|0.49|0.49|
|110|0.48|0.42|0.54|
Four notes of interest while I sort through this:
1) I saw spikes of up to 12ms with BQL's limiter enabled at one point
or another.
I'll try to duplicate that.
2) I did manage to crash QFQ multiple times earlier in the night
(on every interface that has sfq on it now)
3) And when the ping ends up in the wrong SFQ bin (a hash collision
with one of the bulk flows), the results can be interesting:
|125|0.56|98.91|0.55|
|126|0.41|96.54|0.52|
|127|0.35|96.11|0.91|
|128|0.23|106.52|0.57|
|129|0.42|104.01|0.83|
|130|0.44|105.92|0.59|
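My read on those ~100ms rows: the ping hashed into the same SFQ bucket
as a bulk TCP flow, so each echo waits behind that flow's backlog until
the hash perturbation rehashes it elsewhere. Shortening the
perturbation interval is one way to keep such collisions transient; a
sketch, assuming eth1 is the affected interface:

  # rehash flows every 10 seconds so bucket collisions clear quickly
  tc qdisc change dev eth1 root sfq perturb 10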
4) There was packet loss (yea!) and many other anomalies. I ran each test
for 600 seconds; I still need to look at the actual data transferred, and
will try a plot later.
But I can say the two-day-old SFQ stuff stands up to a load test...
And QFQ can do pretty well too, when not crashing...
I will get to your new patch set over the weekend.
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net