Message-Id: <1243098211.4217.9.camel@obelisk.thedillows.org>
Date: Sat, 23 May 2009 13:03:31 -0400
From: David Dillow <dave@...dillows.org>
To: Michael Riepe <michael.riepe@...glemail.com>
Cc: Michael Buesch <mb@...sch.de>,
Francois Romieu <romieu@...zoreil.com>,
Rui Santos <rsantos@...popie.com>,
Michael Büker <m.bueker@...lin.de>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH 2.6.30-rc4] r8169: avoid losing MSI interrupts
On Sat, 2009-05-23 at 18:53 +0200, Michael Riepe wrote:
>
> David Dillow wrote:
> > On Sat, 2009-05-23 at 18:12 +0200, Michael Riepe wrote:
> >
> >>If I use two connections (iperf -P2) and nail iperf to both threads of a
> >>single core with taskset (the program is multi-threaded, just in case
> >>you wonder), I get this:
> >>
> >>CPU 0+2: 0.0-60.0 sec 4.65 GBytes 665 Mbits/sec
> >>CPU 1+3: 0.0-60.0 sec 6.43 GBytes 920 Mbits/sec
> >>
> >>That's quite a difference, isn't it?
> >>
> >>Now I wonder what CPU 0 is doing...
> >
> >
> > Where does /proc/interrupts say the irqs are going?
>
> Oh well...
>  27:   48463995          0          0          0   PCI-MSI-edge   eth0
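So everything is landing on CPU 0. If you want to cross-check that, you
can steer the IRQ by hand -- an untested sketch, using the standard
/proc/irq/<n>/smp_affinity interface and IRQ 27 from your output above:

    # show the per-CPU interrupt counts for eth0
    grep eth0 /proc/interrupts
    # steer IRQ 27 to CPU 1 (affinity bitmask 0x2), as root
    echo 2 > /proc/irq/27/smp_affinity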
What does it look like if you move iperf around the CPUs while booting
with pci=nomsi?
I'm looking to make sure I didn't cause a terrible regression in the
cost of IRQ handling...
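In case it helps, I mean something along these lines -- a sketch,
assuming the box was booted with pci=nomsi and the other end is running
"iperf -s" ("server" below is just a placeholder for that host):

    # repeat the two-connection run pinned to each CPU pair in turn
    taskset -c 0,2 iperf -c server -P2 -t 60
    taskset -c 1,3 iperf -c server -P2 -t 60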