Message-ID: <20091203165004.GA14665@sgi.com>
Date: Thu, 3 Dec 2009 10:50:04 -0600
From: Dimitri Sivanich <sivanich@....com>
To: Arjan van de Ven <arjan@...radead.org>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Yinghai Lu <yinghai@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Jesse Barnes <jbarnes@...tuousgeek.org>,
David Miller <davem@...emloft.net>,
Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v6] x86/apic: limit irq affinity
On Wed, Nov 25, 2009 at 07:40:33AM -0800, Arjan van de Ven wrote:
> On Tue, 24 Nov 2009 09:41:18 -0800
> ebiederm@...ssion.com (Eric W. Biederman) wrote:
> > Oii.
> >
> > I don't think it is bad to export information to applications like
> > irqbalance.
> >
> > I think it pretty horrible that one of the standard ways I have heard
> > to improve performance on 10G nics is to kill irqbalance.
>
> irqbalance does not move networking irqs; if it does there's something
> evil going on in the system. But thanks for the bugreport ;)
It does move networking irqs.
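For what it's worth, the usual "kill irqbalance" recipe then pins each NIC queue
vector by hand through /proc/irq/N/smp_affinity. A minimal userspace sketch of
that step, with a made-up IRQ number and cpu choice (only the smp_affinity file
itself is an existing interface):

/* Pin one IRQ to one CPU by writing a hex cpumask to procfs.
 * The IRQ number and CPU below are placeholders for a real NIC queue vector
 * and a cpu local to the NIC's node. */
#include <stdio.h>

int main(void)
{
	const int irq = 73;			/* hypothetical 10G rx queue vector */
	const unsigned int mask = 1u << 4;	/* hypothetical local cpu 4 */
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return 1;
	fprintf(f, "%x\n", mask);
	fclose(f);
	return 0;
}
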
>
> we had that; it didn't work.
> what I'm asking for is for the kernel to expose the numa information;
> right now that is the piece that is missing.
>
I'm wondering whether we should expose that numa information in the form of a node, in the form of the set of allowed cpus, or both?
I'm guessing 'both' is the correct answer, so that apps like irqbalance can make a qualitative decision based on the node (affinity to cpus on this node is better) and an absolute decision based on the allowed cpus (affinity cannot be set to anything outside that set).
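To make that concrete, here is a minimal userspace sketch of how such a daemon
could combine the two pieces: prefer the cpus on the IRQ's home node, but never
pick a cpu outside the allowed set. The per-IRQ "node" and "allowed_cpus" files
are placeholders for whatever interface we end up exporting;
/sys/devices/system/node/nodeN/cpumap and /proc/irq/N/smp_affinity already
exist. It also assumes <= 32 cpus so a single hex mask word is enough.

#include <stdio.h>
#include <stdlib.h>

/* Read a single hex mask word from a file; 0 if the file is missing. */
static unsigned int read_hex(const char *path)
{
	unsigned int val = 0;
	FILE *f = fopen(path, "r");

	if (!f || fscanf(f, "%x", &val) != 1)
		val = 0;
	if (f)
		fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	char path[128];
	int irq, node;
	unsigned int node_mask, allowed_mask, mask;
	FILE *f;

	if (argc != 2)
		return 1;
	irq = atoi(argv[1]);

	/* Qualitative hint: the IRQ's home node (assumed per-IRQ file). */
	snprintf(path, sizeof(path), "/proc/irq/%d/node", irq);
	f = fopen(path, "r");
	if (!f || fscanf(f, "%d", &node) != 1)
		node = 0;
	if (f)
		fclose(f);

	/* Cpus on that node (existing sysfs attribute, hex cpumask). */
	snprintf(path, sizeof(path),
		 "/sys/devices/system/node/node%d/cpumap", node);
	node_mask = read_hex(path);

	/* Absolute limit: cpus the IRQ may be routed to at all (assumed file). */
	snprintf(path, sizeof(path), "/proc/irq/%d/allowed_cpus", irq);
	allowed_mask = read_hex(path);
	if (!allowed_mask)
		allowed_mask = ~0u;	/* no restriction exported */

	/* Prefer the home node, but never step outside the allowed set. */
	mask = node_mask & allowed_mask;
	if (!mask)
		mask = allowed_mask;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return 1;
	fprintf(f, "%x\n", mask);
	fclose(f);
	return 0;
}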