Message-ID: <20140925102505.494acab1@redhat.com>
Date: Thu, 25 Sep 2014 10:25:05 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Jamal Hadi Salim <jhs@...atatu.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org,
therbert@...gle.com, "David S. Miller" <davem@...emloft.net>,
Alexander Duyck <alexander.h.duyck@...el.com>, toke@...e.dk,
Florian Westphal <fw@...len.de>,
Dave Taht <dave.taht@...il.com>,
John Fastabend <john.r.fastabend@...el.com>,
Daniel Borkmann <dborkman@...hat.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
brouer@...hat.com
Subject: Re: [net-next PATCH 1/1 V4] qdisc: bulk dequeue support for qdiscs
with TCQ_F_ONETXQUEUE
On Wed, 24 Sep 2014 18:13:57 -0400
Jamal Hadi Salim <jhs@...atatu.com> wrote:
> On 09/24/14 13:58, Jesper Dangaard Brouer wrote:
> > On Wed, 24 Sep 2014 10:23:15 -0700
> > Eric Dumazet <eric.dumazet@...il.com> wrote:
> >
> >> pktgen is nice, but it does not represent the majority of the traffic
> >> we send from high-performance hosts where we want this bulk dequeue
> >> thing ;)
> >
> > This patch is actually targeted towards more normal use-cases.
> > Pktgen cannot even use this work, as it bypasses the qdisc layer...
>
> When you post these patches - can you please also post basic performance
> numbers? You don't have to show an improvement if it is hard for bulking
> to kick in, but you need to show no harm, at least in latency, for the
> general use case (i.e. not pktgen; maybe forwarding activity or
> something sourced from TCP).
I've done measurements with netperf-wrapper:
http://netoptimizer.blogspot.dk/2014/09/mini-tutorial-for-netperf-wrapper-setup.html
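
For reference, a typical invocation of the "rrul" test (which loads the
link in both directions while sampling latency, making it good at
exposing bulking-induced latency regressions) looks something like the
following; the server address is a placeholder, and double-check the
flags against your netperf-wrapper version:

  netperf-wrapper -H <server-ip> -l 60 rrul
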
I have previously posted my measurements here:
http://people.netfilter.org/hawk/qdisc/
http://people.netfilter.org/hawk/qdisc/measure01/
http://people.netfilter.org/hawk/qdisc/experiment01/
Please see my previous mail, where I described each graph.
The above measurements are for 10Gbit/s, but I've also done measurements
at 1Gbit/s with the igb driver, and at 10Mbit/s by forcing igb down to
10Mbit/s. I forgot to upload those results (and I cannot upload them
right now, as I'm currently in Switzerland).
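
For those who have not read the patch itself, the core idea is roughly
the following. This is a much simplified sketch, not the actual patch;
the function name and the 'budget' parameter are made up for
illustration, while TCQ_F_ONETXQUEUE and the qdisc dequeue hook are the
real things (see include/net/sch_generic.h):

static struct sk_buff *dequeue_skb_bulk(struct Qdisc *q, int budget)
{
	struct sk_buff *head = q->dequeue(q);
	struct sk_buff *tail = head;

	/* Bulking is only safe for TCQ_F_ONETXQUEUE qdiscs: all their
	 * packets go to a single txq, so a chain cannot be reordered. */
	if (!head || !(q->flags & TCQ_F_ONETXQUEUE))
		return head;

	/* Chain up to 'budget' extra packets via skb->next.  The whole
	 * chain is later handed to the driver in one go, amortizing the
	 * txq lock cost and enabling the new xmit_more API. */
	while (--budget > 0) {
		struct sk_buff *skb = q->dequeue(q);

		if (!skb)
			break;
		tail->next = skb;
		tail = skb;
	}
	tail->next = NULL;

	return head;
}

In the actual patch the bulk amount is bounded by BQL (the txq's
dynamic byte limit), which is exactly the mechanism that should avoid
the latency harm Jamal is asking about.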
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer