Date:	Fri, 20 Mar 2009 10:27:17 -0700 (Pacific Daylight Time)
From:	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>
To:	Terry <hanfang@...du.com>
cc:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	jesse.brandeburg@...el.com
Subject: Re: performance issue of netdev_budget and dev_weight with ixgbe

On Fri, 20 Mar 2009, Terry wrote:
> I was doing some tuning work for IP forwarding with Oplin cards.
> I found that when I set netdev_budget to 64 (the default is 300), just the

netdev_budget caps the total polling work (packets processed) in one 
softirq run, shared across all the devices running NAPI on a single 
CPU; together with each device's weight, it limits how many times poll 
can be called.
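
To make that concrete, here is a heavily condensed sketch of what
net_rx_action() in net/core/dev.c does in kernels of this era
(paraphrased for illustration, not the verbatim source):

	/* Condensed sketch of net_rx_action(), circa 2.6.29; locking,
	 * netpoll, and stats handling omitted. */
	static void net_rx_action(struct softirq_action *h)
	{
		struct list_head *list = &__get_cpu_var(softnet_data).poll_list;
		unsigned long time_limit = jiffies + 2;
		int budget = netdev_budget;	/* shared by every device on this CPU */

		while (!list_empty(list)) {
			struct napi_struct *n;
			int work, weight;

			/* Out of budget or out of time: count a time_squeeze
			 * and finish the rest in the next softirq run. */
			if (unlikely(budget <= 0 ||
				     time_after(jiffies, time_limit))) {
				__get_cpu_var(netdev_rx_stat).time_squeeze++;
				break;
			}

			n = list_entry(list->next, struct napi_struct, poll_list);
			weight = n->weight;		/* dev_weight, 64 by default */

			work = n->poll(n, weight);	/* e.g. ixgbe's poll routine */
			budget -= work;

			/* A device that used its full weight goes to the back
			 * of the list, so devices sharing the CPU take turns. */
			if (work == weight)
				list_move_tail(&n->poll_list, list);
		}
	}

So with two interfaces doing NAPI on one CPU, each softirq run
round-robins weight-sized chunks between them until the shared budget
or the time limit runs out.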

> same as dev_weight in ixgbe, I got the highest forwarding rate.
> This held not only in the forwarding scenario, but also in the
> "receive and send back" scenario.

You're changing the scheduling (scheduler-interaction) behavior by 
decreasing netdev_budget.  You're also affecting the fairness between 
interfaces that might be running NAPI on the same CPU.

> If I change netdev_budget or the weight in the driver to any other
> value, the performance gets worse.
> How did the ixgbe driver folks pick the value of 64, and how does it
> affect performance?

64 is pretty much the global default weight for all drivers, not just 
ixgbe; we didn't "pick" it at all.  At 10Gb Ethernet speeds we get a 
LOT of packets.  When you play with the budget you're reducing the 
cache benefit you get from handling lots of packets at once.
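
That 64 is just the NAPI weight a driver passes when it registers its
poll routine; paraphrasing what ixgbe (and most other drivers) do:

	/* Typical NAPI registration, paraphrased; 64 is the
	 * conventional per-poll weight across drivers. */
	netif_napi_add(adapter->netdev, &q_vector->napi, ixgbe_poll, 64);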

I think if you look at the time_squeeze counter in 
/proc/net/softnet_stat you'll see that when routing we often take more 
than a jiffy, which means that the next time through the poll loop our 
budget is smaller.  You might want to add code to check the minimum 
budget value that gets passed to ixgbe.  If it gets too small, all 
your CPU time is spent thrashing between scheduling NAPI and never 
getting much work done.
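
Something like the following (an untested debug hack, not a real ixgbe
patch; min_rx_budget is a made-up counter and the hook point is
approximate) would show how small the budget really gets:

	/* Illustrative debug hack: record the smallest budget that
	 * net_rx_action() ever hands our poll routine. */
	static int min_rx_budget = INT_MAX;	/* made-up counter */

	static int ixgbe_poll(struct napi_struct *napi, int budget)
	{
		if (budget < min_rx_budget) {
			min_rx_budget = budget;
			printk(KERN_DEBUG "ixgbe: min poll budget now %d\n",
			       budget);
		}

		/* ... existing rx/tx cleanup, bounded by budget ... */
	}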

The per-packet cost of routing, with all the transmit work merged into 
the netif_receive_skb path, is so high that I bet 64 packets often 
exceed a jiffy.
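
Rough numbers, just to show the scale: at HZ=1000 a jiffy is 1 ms, so
64 packets fit in one jiffy only if the whole rx + route lookup + tx
path averages under 1 ms / 64, about 15.6 us per packet (at HZ=250 a
jiffy is 4 ms and the threshold is about 62 us).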

This is probably an area where the kernel stack could improve by 
batching packets on receive (possibly similar to what Yanmin Zhang has 
been posting recently), especially when routing.
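
Purely as a sketch of the idea (hypothetical code, not Yanmin's actual
patches; netif_receive_skb_batch() and ixgbe_fetch_rx_skb() are
made-up names): the driver would queue up a poll's worth of skbs and
hand them to the stack in one go, so the forwarding tables stay
cache-hot across the whole batch:

	/* Hypothetical batched-receive sketch; the two "made up"
	 * functions below do not exist in the kernel. */
	struct sk_buff_head batch;
	struct sk_buff *skb;
	int work_done = 0;

	__skb_queue_head_init(&batch);
	while (work_done < budget &&
	       (skb = ixgbe_fetch_rx_skb(rx_ring)) != NULL) {	/* made up */
		__skb_queue_tail(&batch, skb);
		work_done++;
	}
	netif_receive_skb_batch(&batch);	/* made-up batch entry point */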
