Message-ID: <20070525054422.GA23748@colo.lackof.org>
Date: Thu, 24 May 2007 23:44:22 -0600
From: Grant Grundler <grundler@...isc-linux.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
Greg Kroah-Hartman <gregkh@...e.de>,
Jay Cliburn <jacliburn@...lsouth.net>,
Grzegorz Krzystek <ninex@...eX.eu.org>,
Andi Kleen <ak@...e.de>, ninex@...pl,
linux-kernel@...r.kernel.org, linux-pci@...ey.karlin.mff.cuni.cz,
Michael Ellerman <michael@...erman.id.au>,
David Miller <davem@...emloft.net>,
Tony Luck <tony.luck@...el.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
On Thu, May 24, 2007 at 09:31:57PM -0700, Andrew Morton wrote:
> On Thu, 24 May 2007 22:19:09 -0600 ebiederm@...ssion.com (Eric W. Biederman) wrote:
>
> > Currently we blacklist known bad msi configurations which means we
> > keep getting MSI enabled on chipsets that either do not support MSI,
> > or MSI is implemented improperly. Since the normal IRQ routing
> > mechanism seems to work even when MSI does not, this is a bad default
> > and causes non-functioning systems for no good reason.
...
> Yup.
>
> Do we have a feel for how much performance we're losing on those
> systems which _could_ do MSI, but which will end up defaulting
> to not using it?
Rick Jones (HP, aka Mr. Netperf.org) recently posted some data that
happens to compare the two cases. I've clipped out the two relevant
lines below:
http://lists.openfabrics.org/pipermail/general/2007-May/035709.html
                               Bulk Transfer                    "Latency"
                         Unidir               Bidir
Card              Mbit/s    SDx    SDr  Mbit/s   SDx   SDr  Tran/s    SDx    SDr
--------------------------------------------------------------------------------
Myri10G IP 9k       9320  0.862  0.949   10950  1.00  0.86   19260  19.67  16.18 *
Myri10G IP 9k msi   9320  0.449  0.672   10840  0.63  0.62   19430  11.68  11.56
The original posting explains the fields; lower service demand means
less CPU burned per unit of work.
SDx (Service Demand on Transmit) is roughly 2x higher with MSI disabled.
SDr (Service Demand on RX) is roughly 40% higher with MSI disabled.
The service demands for the latency test show similar gaps.
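In case anyone wants to double-check those ratios, here's a throwaway
snippet that recomputes them from the unidirectional columns above (the
constants are copied straight from Rick's table; nothing else here is
from his posting):

#include <stdio.h>

/* Recompute the service-demand ratios quoted above from the
 * unidirectional bulk-transfer columns: no-MSI row vs. msi row. */
int main(void)
{
	const double sdx_nomsi = 0.862, sdx_msi = 0.449;	/* transmit */
	const double sdr_nomsi = 0.949, sdr_msi = 0.672;	/* receive  */

	printf("SDx ratio (no-MSI / MSI): %.2f\n", sdx_nomsi / sdx_msi); /* ~1.92 */
	printf("SDr ratio (no-MSI / MSI): %.2f\n", sdr_nomsi / sdr_msi); /* ~1.41 */

	return 0;
}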
ISTR seeing a ~5-10% difference on tg3 NICs and ~20% with PCI-X
InfiniBand (all behind the HP ZX1 chipset; the bottleneck was the PCI-X
bus). When I posted a tg3 patch to linux-net (which got rejected because
of tg3 HW bugs), I did NOT include any performance numbers like I
thought I had. :(
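For anyone following the thread without the patch handy: none of this
changes the driver-side API. Below is a minimal sketch of the usual
pattern (the function name, the "example" string, and the error handling
are illustrative, not taken from Eric's patch or any real driver): the
driver asks for MSI and quietly falls back to legacy INTx when the PCI
core refuses, which is exactly what will happen on chipsets that don't
make the whitelist.

#include <linux/pci.h>
#include <linux/interrupt.h>

/* Sketch only: the common "try MSI, fall back to INTx" setup. */
static int example_setup_irq(struct pci_dev *pdev, irq_handler_t handler,
			     void *ctx)
{
	int have_msi, err;

	/* With a whitelist, this fails on chipsets the PCI core does
	 * not trust, and we simply keep using the legacy IRQ. */
	have_msi = (pci_enable_msi(pdev) == 0);
	if (!have_msi)
		dev_info(&pdev->dev, "MSI not available, using legacy IRQ\n");

	/* pdev->irq now refers to either the MSI vector or the INTx
	 * line, so the request looks the same either way. */
	err = request_irq(pdev->irq, handler,
			  have_msi ? 0 : IRQF_SHARED, "example", ctx);
	if (err && have_msi)
		pci_disable_msi(pdev);

	return err;
}

The point of the whitelist is just to make that pci_enable_msi() call
fail on hardware nobody has verified, instead of succeeding and leaving
the device with an interrupt that never arrives.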
hth,
grant