Message-ID: <20150813151033.048d73b8@redhat.com>
Date: Thu, 13 Aug 2015 15:10:33 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Phil Sutter <phil@....cc>
Cc: Alexei Starovoitov <alexei.starovoitov@...il.com>,
netdev@...r.kernel.org, brouer@...hat.com
Subject: Re: [net-next PATCH 1/3] net: make default tx_queue_len configurable
On Thu, 13 Aug 2015 03:13:40 +0200 Phil Sutter <phil@....cc> wrote:
> On Tue, Aug 11, 2015 at 06:13:49PM -0700, Alexei Starovoitov wrote:
> > In general 'changing the default' may be an acceptable thing, but then
> > it needs to strongly justified. How much performance does it bring?
>
> In a quick test on my local VM with veth and netperf (netserver and
> veth peer in different netns), I see an increase of about 5% in
> throughput when using noqueue instead of the default pfifo_fast.
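
For reference, the quoted test setup can be sketched roughly as below.
This is only an illustrative reconstruction (interface names, namespace
name, and addresses are made up, not taken from Phil's actual test):

```shell
# Create a namespace and a veth pair, move one peer into the namespace.
ip netns add peer
ip link add veth0 type veth peer name veth1
ip link set veth1 netns peer

# Address and bring up both ends.
ip addr add 10.0.0.1/24 dev veth0
ip link set veth0 up
ip netns exec peer ip addr add 10.0.0.2/24 dev veth1
ip netns exec peer ip link set veth1 up

# Run netserver inside the namespace, then measure from the host side.
ip netns exec peer netserver
netperf -H 10.0.0.2 -t TCP_STREAM
```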
Good that you can show a 5% improvement with a single netperf flow. We
save approximately 6 atomic operations by avoiding the qdisc code path.
This fixes a scalability issue with veth, so the real performance
boost will show up with multiple flows and multiple CPU cores in
action. You can try with a multi-core VM and use super_netperf:
https://github.com/borkmann/stuff/blob/master/super_netperf
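
A hedged sketch of what such a multi-flow test could look like
(interface name and target address are illustrative; the tc attach of
noqueue assumes a kernel/iproute2 that supports it, while on older
kernels noqueue was only selected as the default when the link had a
txqueuelen of 0 at qdisc attach time):

```shell
# Replace the default pfifo_fast root qdisc with noqueue on the veth.
ip link set dev veth0 txqueuelen 0
tc qdisc replace dev veth0 root noqueue

# Verify which qdisc is now attached.
tc qdisc show dev veth0

# Drive 8 parallel TCP_STREAM flows; super_netperf prints the
# aggregated throughput across all flows.
./super_netperf 8 -H 10.0.0.2 -t TCP_STREAM
```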
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html