Message-ID: <20090706185353.GY5480@parisc-linux.org>
Date: Mon, 6 Jul 2009 12:53:53 -0600
From: Matthew Wilcox <matthew@....cx>
To: "Ma, Chinang" <chinang.ma@...el.com>
Cc: Rick Jones <rick.jones2@...com>,
Herbert Xu <herbert@...dor.apana.org.au>,
Jeff Garzik <jeff@...zik.org>,
"andi@...stfloor.org" <andi@...stfloor.org>,
"arjan@...radead.org" <arjan@...radead.org>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Styner, Douglas W" <douglas.w.styner@...el.com>,
"Prickett, Terry O" <terry.o.prickett@...el.com>,
"Wilcox, Matthew R" <matthew.r.wilcox@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>
Subject: Re: >10% performance degradation since 2.6.18
On Mon, Jul 06, 2009 at 11:48:47AM -0700, Ma, Chinang wrote:
> On the subject of distributing interrupt load: can we do the same thing
> with the IOC interrupts? We have 4 LSI 3801s, and the number of I/O
> interrupts is huge on 4 of the CPUs. Is there a way to divide the IOC
> IRQ numbers so we can spread the I/O interrupts across more CPUs?
I think the LSI 3801 only has one interrupt per IOC, but they could
perhaps be better spread out. Right now, they're delivering interrupts
to CPUs 2, 3, 4 and 5. Possibly spreading them out to CPUs 2, 6, 10
and 14 would help. Or maybe it would hurt ...
(nb: the ioc interrupts are similarly tied to CPUs 2, 3, 4 and 5 with
2.6.18, so this isn't a likely cause of/solution to the regression,
it may just be a path to better numbers).
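For the record, that kind of spreading can be scripted through
/proc/irq/<N>/smp_affinity. The IRQ numbers below (28-31) are placeholders
I made up for illustration; the real ones come from /proc/interrupts.

```shell
#!/bin/sh
# Sketch: pin each of the four IOC interrupts to one CPU per package
# (CPUs 2, 6, 10, 14). IRQ numbers are hypothetical -- check
# /proc/interrupts for the mptsas/LSI lines on your box.
cpus="2 6 10 14"
irqs="28 29 30 31"

set -- $cpus
for irq in $irqs; do
    cpu=$1; shift
    # smp_affinity takes a hex bitmask of allowed CPUs; one bit per CPU.
    mask=$(printf '%x' $((1 << cpu)))
    echo "IRQ $irq -> CPU $cpu (mask $mask)"
    # echo $mask > /proc/irq/$irq/smp_affinity   # uncomment; needs root
done
```

Note that irqbalance, if running, will happily rewrite these masks behind
your back, so it needs to be stopped (or told to ban those IRQs) for the
pinning to stick.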
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/