Message-ID: <20090706174203.GV5480@parisc-linux.org>
Date: Mon, 6 Jul 2009 11:42:03 -0600
From: Matthew Wilcox <matthew@....cx>
To: "Ma, Chinang" <chinang.ma@...el.com>
Cc: Rick Jones <rick.jones2@...com>,
Herbert Xu <herbert@...dor.apana.org.au>,
Jeff Garzik <jeff@...zik.org>,
"andi@...stfloor.org" <andi@...stfloor.org>,
"arjan@...radead.org" <arjan@...radead.org>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Styner, Douglas W" <douglas.w.styner@...el.com>,
"Prickett, Terry O" <terry.o.prickett@...el.com>,
"Wilcox, Matthew R" <matthew.r.wilcox@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
jesse.brandeburg@...el.com
Subject: Re: >10% performance degradation since 2.6.18
On Mon, Jul 06, 2009 at 10:36:11AM -0700, Ma, Chinang wrote:
> For the OLTP workload we are not pushing much network throughput; lower
> network latency matters more for OLTP performance. For the original
> Nehalem two-socket OLTP result in this mail thread, we bound the two NIC
> interrupts to cpu1 and cpu9 (one NIC per socket). The database processes
> are divided into two groups, each pinned to a socket, and each process
> only receives requests from the NIC bound to its socket. This binding
> scheme gave us a >1% performance boost on pre-Nehalem platforms, and we
> also see a positive impact on this NHM system.
So you've tried spreading the four RX and TX interrupts for each card
out over, say, CPUs 1, 3, 5, 7 for eth1 and then 9, 11, 13, 15 for eth0,
and that performs worse than having each card's four interrupts tied to
CPU 1 and CPU 9 respectively? Interesting.
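
For reference, this is roughly what I have in mind for the affinity
experiment. It's a minimal, untested sketch; the IRQ numbers are
placeholders and the real ones need to come from /proc/interrupts on
your box:

	/* Sketch: pin each of a card's RX/TX vectors to its own CPU by
	 * writing a hex CPU mask to /proc/irq/<n>/smp_affinity.  The IRQ
	 * numbers below are assumptions, not the real ones.
	 */
	#include <stdio.h>

	int main(void)
	{
		int irqs[] = { 58, 59, 60, 61 };	/* assumed eth1 vectors */
		int cpus[] = {  1,  3,  5,  7 };	/* spread over one socket */
		unsigned int i;

		for (i = 0; i < sizeof(irqs) / sizeof(irqs[0]); i++) {
			char path[64];
			FILE *f;

			snprintf(path, sizeof(path),
				 "/proc/irq/%d/smp_affinity", irqs[i]);
			f = fopen(path, "w");
			if (!f) {
				perror(path);
				return 1;
			}
			fprintf(f, "%x\n", 1U << cpus[i]);	/* single-CPU mask */
			fclose(f);
		}
		return 0;
	}

Pinning everything to CPU 1 and CPU 9 instead is the same mechanism,
just with different masks.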
Can you try changing IGB_MAX_RX_QUEUES (in drivers/net/igb/igb.h, about
line 60) to 1, and seeing if performance improves that way?
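
To be explicit, the change I mean is just clamping that define to a
single queue, something like the line below. I haven't checked what the
right-hand side looks like in your exact tree, so treat it as a sketch:

	/* drivers/net/igb/igb.h, around line 60 */
	#define IGB_MAX_RX_QUEUES	1	/* force a single RX queue for this test */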
--
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."