Message-Id: <200905070046.27551.denys@visp.net.lb>
Date: Thu, 7 May 2009 00:46:27 +0300
From: Denys Fedoryschenko <denys@...p.net.lb>
To: Vladimir Ivashchenko <hazard@...ncoudi.com>
Cc: netdev@...r.kernel.org
Subject: Re: bond + tc regression ?
On Wednesday 06 May 2009 23:47:59 Vladimir Ivashchenko wrote:
> On Wed, May 06, 2009 at 10:30:04PM +0300, Denys Fedoryschenko wrote:
> > > What's interesting: the same 850 Mbps load on an identical machine, but
> > > with only two NICs and no bond, HTB+esfq, kernel 2.6.21.2 => 60% CPU idle.
> > > That's 2.5x the overhead.
> >
> > Probably oprofile can shed some light on this.
> > In my own experience, IRQ balancing hurts performance a lot because of
> > cache misses.
>
> This is a dual-core machine; isn't the cache shared between the cores?
>
> Without IRQ balancing, one of the cores drops to around 10% idle and HTB
> doesn't do its job properly. Actually, in my experience HTB stops working
> properly once idle goes below 35%.
It seems it should be. I'm not sure; more experienced guys will know better.
Can you please show me the output of:
cat /proc/net/psched
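For reference, with high-resolution timers active it looks roughly like this
(values are illustrative and vary by kernel version):

  000003e8 00000400 000f4240 3b9aca00
  # the last field is the timer frequency: 3b9aca00 (10^9) means 1 ns
  # resolution, i.e. hrtimers are active; a small value such as
  # 000000fa (250) means the clock only ticks at HZ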
If high-resolution timers are working, try adding
HZ=1000
as the first line of the HTB script, to set the environment variable for tc.
Because if the clock resolution is high, the burst calculation goes crazy at
high speeds.
Maybe it will help.
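A rough sketch of what I mean (the device, rate and class here are made up,
only to show where the variable goes):

  # note: it must be exported, otherwise tc will not see it
  export HZ=1000
  tc qdisc add dev eth0 root handle 1: htb default 10
  tc class add dev eth0 parent 1: classid 1:10 htb rate 850mbit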
Also, without IRQ balancing, did you try assigning each interface's IRQ to a
CPU via smp_affinity (/proc/irq/NN/smp_affinity)?
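Something like this, for example (the IRQ numbers 24 and 25 are made up; take
the real ones from /proc/interrupts):

  grep eth /proc/interrupts             # find each NIC's IRQ
  echo 1 > /proc/irq/24/smp_affinity    # eth0 IRQ -> CPU0 (hex CPU bitmask)
  echo 2 > /proc/irq/25/smp_affinity    # eth1 IRQ -> CPU1
  # stop the irqbalance daemon first, or it will rewrite these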
And I still think the best thing is oprofile. It can show the "hot" places in
the code, i.e. who is spending the CPU cycles.
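Roughly like this (the vmlinux path is an example; it must be the uncompressed
image matching your running kernel, or use --no-vmlinux for a first look):

  opcontrol --init
  opcontrol --vmlinux=/usr/src/linux/vmlinux
  opcontrol --start
  # ... run your 850 Mbps load for a minute or two ...
  opcontrol --shutdown
  opreport -l | head -20                # top symbols by sample count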
>
> I'll try gathering some stats using oprofile.