Message-ID: <CAA93jw7Q4+NqvsShEUG7b2Nu8F8dFMsesZZTFg=npAtfxKhOmg@mail.gmail.com>
Date: Mon, 13 Oct 2014 13:47:14 -0700
From: Dave Taht <dave.taht@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Alexander Duyck <alexander.duyck@...il.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
John Fastabend <john.r.fastabend@...el.com>,
Jamal Hadi Salim <jhs@...atatu.com>,
Daniel Borkmann <dborkman@...hat.com>,
Florian Westphal <fw@...len.de>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Toke Høiland-Jørgensen <toke@...e.dk>,
Tom Herbert <therbert@...gle.com>,
David Miller <davem@...emloft.net>
Subject: Re: Network optimality (was Re: [PATCH net-next] qdisc: validate skb
	without holding lock)

On Mon, Oct 13, 2014 at 1:27 PM, Jesper Dangaard Brouer
<brouer@...hat.com> wrote:
>
> On Mon, 13 Oct 2014 10:20:17 -0700 Dave Taht <dave.taht@...il.com> wrote:
>
>> On Mon, Oct 13, 2014 at 9:58 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> >
>> >> On Oct 13, 2014 7:22 AM, "Dave Taht" <dave.taht@...il.com> wrote:
> [...]
>>
>> I would like to also get better behavior out of gigE and below, and for
>> these changes to not impact the downstream behavior of the network
>> overall.
>
> I also care about 1Gbit/s and below, that's why I did so many tests
> (with igb at 10Mbit/s, 100Mbit/s and 1Gbit/s).
>
>
>> To give you an example, I would like to see the tcp flows in the
>> 2nd chart here converge faster than the 5 seconds they currently
>> take at GigE speeds.
>>
>> http://snapon.lab.bufferbloat.net/~cero2/nuc-to-puck/results.html
>
> In the last graph you cannot saturate the link, because you turned
> off GSO, GRO and TSO. Here I expect you will see the benefit of the
> qdisc bulking: you will be able to saturate the link and still
> achieve the lower latency, as BQL will cut the bursts off at +1 MTU.
> I would be interested in the results...
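>
> For concreteness, the bulking idea is roughly the following (a
> simplified sketch, not the exact net-next code; the function name and
> the way the byte budget is obtained are illustrative):
>
>   /* Kernel context assumed (<linux/skbuff.h>, <net/sch_generic.h>).
>    * Dequeue a chain of packets, stopping once the BQL byte budget is
>    * spent. The last packet may overshoot the budget, which is why
>    * bursts get cut off at "budget + 1 MTU" rather than exactly. */
>   static struct sk_buff *bulk_dequeue(struct Qdisc *q, int bytelimit)
>   {
>           struct sk_buff *head = q->dequeue(q);
>           struct sk_buff *tail = head;
>
>           if (head)
>                   bytelimit -= head->len;
>           while (tail && bytelimit > 0) {
>                   struct sk_buff *nskb = q->dequeue(q);
>
>                   if (!nskb)
>                           break;
>                   bytelimit -= nskb->len; /* may go negative: +1 MTU */
>                   tail->next = nskb;
>                   tail = nskb;
>           }
>           if (tail)
>                   tail->next = NULL;
>           return head; /* caller hands the whole chain to the driver */
>   }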
I am too!!!! 5 seconds to converge? 50x the baseline latency under load?
vs. not being able to saturate the link at all? Ugh. Two lousy choices.

I think xmit_more will help a lot in the latter case, and my other
suggestions regarding reducing the size of the offloads will help in
the former.
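
To be concrete about "reducing the size of the offloads", I mean
something along these lines (illustrative only: the 16KB value is an
arbitrary example, and where such a cap belongs is exactly what's up
for debate):

  /* Cap the GSO/TSO superpacket size so a single burst carries fewer
   * MTUs of data. netif_set_gso_max_size() is the existing helper in
   * <linux/netdevice.h>; the default cap is GSO_MAX_SIZE (64KB). */
  netif_set_gso_max_size(netdev, 16 * 1024);
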
But it looks like xmit_more support needs to be added to fq and fq_codel (?),
and despite my having read the patches submitted thus far, it would be
saner for someone else to patch e1000e support for the nuc (and the
zillions of other e1000e platforms). (Did I miss that patch go by?)
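
For whoever does pick it up: I'd expect it to follow the same pattern
as the igb/ixgbe conversions, roughly the shape below. This is a
sketch, not a tested e1000e patch, and the ring/queue helper names
here are assumptions, not e1000e's actual ones.

  /* At the tail of the driver's xmit routine: only bang the tail
   * register (the doorbell MMIO write) when the stack indicates no
   * more packets are queued behind this one, or the queue is being
   * stopped; otherwise skip it and let descriptors accumulate so one
   * write flushes the whole burst. */
  if (!skb->xmit_more ||
      netif_xmit_stopped(netdev_get_tx_queue(netdev, queue_index)))
          writel(next_to_use, tx_ring->tail);
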
I'm certainly willing to test the result on that platform (and I have
some other tweaks in my queue at the qdisc layer that I can throw in,
also).
>> > We made all these changes so that we can spend cpu cycles in the
>> > right place.
>
> Exactly.
+1.
So what happens when more cpu cycles are available in the right place?
The dequeue routines in both fq and fq_codel are a bit more complex than
pfifo_fast's (and I've longed to kill the maxpacket concept in codel, btw).
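
(Context for the maxpacket gripe: this paraphrases the check in
include/net/codel.h from memory, so treat the exact field names as
approximate.)

  /* codel tracks the largest packet it has seen and refuses to enter
   * the dropping state while the backlog is no bigger than that. With
   * TSO superpackets, "largest packet seen" can approach 64KB, so a
   * standing queue dozens of MTUs deep never gets signaled. */
  stats->maxpacket = max(stats->maxpacket, qdisc_pkt_len(skb));
  if (codel_time_before(vars->ldelay, params->target) ||
      sch->qstats.backlog <= stats->maxpacket)
          return false; /* stay out of the drop state */
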
Y'all are in such a lovely place, with profilers and hardware at the
ready, to just add a simple sysctl and analyze what happens at rates I
can only dream of.
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Sr. Network Kernel Developer at Red Hat
> Author of http://www.iptv-analyzer.org
> LinkedIn: http://www.linkedin.com/in/brouer
--
Dave Täht
https://www.bufferbloat.net/projects/make-wifi-fast