Message-ID: <CAA93jw5qnx6M_p8cxKfOhMt2LYtkxzQXu8EHOktw_15+ZDCnig@mail.gmail.com>
Date: Thu, 25 Sep 2014 06:52:21 -0700
From: Dave Taht <dave.taht@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Jamal Hadi Salim <jhs@...atatu.com>,
Eric Dumazet <eric.dumazet@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Tom Herbert <therbert@...gle.com>,
"David S. Miller" <davem@...emloft.net>,
Alexander Duyck <alexander.h.duyck@...el.com>,
Toke Høiland-Jørgensen <toke@...e.dk>,
Florian Westphal <fw@...len.de>,
John Fastabend <john.r.fastabend@...el.com>,
Daniel Borkmann <dborkman@...hat.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>
Subject: Re: [net-next PATCH 1/1 V4] qdisc: bulk dequeue support for qdiscs
with TCQ_F_ONETXQUEUE
On Thu, Sep 25, 2014 at 1:25 AM, Jesper Dangaard Brouer
<brouer@...hat.com> wrote:
> On Wed, 24 Sep 2014 18:13:57 -0400
> Jamal Hadi Salim <jhs@...atatu.com> wrote:
>
>> On 09/24/14 13:58, Jesper Dangaard Brouer wrote:
>> > On Wed, 24 Sep 2014 10:23:15 -0700
>> > Eric Dumazet <eric.dumazet@...il.com> wrote:
>> >
>>
>> >
>> >> pktgen is nice, but does not represent the majority of the traffic we send
>> >> from high-performance hosts where we want this bulk dequeue thing ;)
>> >
>> > This patch is actually targeted towards more normal use-cases.
>> > Pktgen cannot even use this work, as it bypasses the qdisc layer...
>>
>> When you post these patches, can you please also post basic performance
>> numbers? You don't have to show improvement if it is hard for bulking
>> to kick in, but you need to show no harm, at least in latency, for the
>> general use case (i.e. not pktgen; maybe forwarding activity or something
>> sourced from TCP).
>
> I've done measurements with netperf-wrapper:
> http://netoptimizer.blogspot.dk/2014/09/mini-tutorial-for-netperf-wrapper-setup.html
>
> I have already previously posted my measurements here:
> http://people.netfilter.org/hawk/qdisc/
> http://people.netfilter.org/hawk/qdisc/measure01/
> http://people.netfilter.org/hawk/qdisc/experiment01/
>
> Please, see my previous mail where I described each graph.
Stuff like this:
http://people.netfilter.org/hawk/qdisc/experiment01/compare_TSO_vs_TSO_with_rxusec30__rr_latency.png
will no doubt make many a high-speed trader happy.
My point was that by repeating this experiment for each
successive change (Eric's 1/2 BQL patch, better batching, sch_fq,
different ethernet drivers, etc.),
you (or people duplicating the experiment) can produce
ongoing comparison plots...
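A rough sketch of that repeat-and-compare workflow with netperf-wrapper (the
hostname, test length, and data-file names below are placeholders for
illustration, not details from the thread; check the tool's own help for the
exact options on your version):

```shell
#!/bin/sh
# Hypothetical netperf-wrapper workflow: run the same test before and
# after each kernel change, keep the data files, and re-plot them side
# by side later. Printed as a dry run so the commands are visible.
HOST=netperf-server.example.org   # placeholder test server
LEN=60                            # test length in seconds

# One run per kernel under test; each run saves a .json.gz data file.
run_test() {
    echo "netperf-wrapper -H $HOST -l $LEN -t $1 rrul"
}

run_test baseline
run_test bulk-dequeue

# Later, load the stored data files together for an ongoing comparison plot.
echo "netperf-wrapper -i baseline.json.gz bulk-dequeue.json.gz -p totals -o compare.png"
```

Because the data files are kept, anyone duplicating the setup can add their
own runs to the same comparison as successive patches land.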
> The above measurements are for 10Gbit/s, but I've also done measurements
> on the 1Gbit/s igb driver, and at 10Mbit/s by forcing igb to use 10Mbit/s.
> I forgot to upload those results (and I cannot upload them right now,
> as I'm currently in Switzerland).
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Sr. Network Kernel Developer at Red Hat
> Author of http://www.iptv-analyzer.org
> LinkedIn: http://www.linkedin.com/in/brouer
--
Dave Täht
https://www.bufferbloat.net/projects/make-wifi-fast