Message-ID: <491CC8D7.7030306@sgi.com>
Date: Thu, 13 Nov 2008 16:39:51 -0800
From: Mike Travis <travis@....com>
To: David Miller <davem@...emloft.net>
CC: paulus@...ba.org, akpm@...ux-foundation.org, yinghai@...nel.org,
mingo@...e.hu, tglx@...utronix.de, hpa@...or.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sparse_irq aka dyn_irq v13
David Miller wrote:
> From: Mike Travis <travis@....com>
> Date: Thu, 13 Nov 2008 16:15:12 -0800
>
>> David Miller wrote:
>>> From: Mike Travis <travis@....com>
>>> Date: Thu, 13 Nov 2008 15:11:29 -0800
>>>
>>> We use a value of 256 and I've been booting linux on 128 cpu sparc64
>>> systems with lots of PCI-E host controllers (and others have booted it
>>> on even larger ones). All of which have several NUMA domains.
>>>
>>> It's not an issue.
>> Are you saying that having a fixed count of IRQs is not an issue? With
>> NR_CPUS=4096 what would you fix it to? (Currently it's NR_CPUS * 32,
>> but that might not be sufficient.) Would NR_CPUS=16384 make it an issue?
>
> Nope, and nope. I frequently run kernels with NR_CPUS set to huge
> values.
>
> It seems that the issue on x86 is that it has its IRQ count tied to
> the number of cpus; that's not very intelligent. Perhaps that part
> should be rearranged somehow?
Yes, you're probably right, but it is what it is. Most of the irq vectors
have more to do with cpus than with i/o devices (the system vectors, ipi,
kdb, and gru [a uv thing] interrupt vectors come to mind first). These,
by necessity, need to grow with NR_CPUS if you're fixing the total IRQ count.
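For illustration, here is a stand-alone sketch of that scaling. The
constants and the formula are illustrative, modeled loosely on what
arch/x86 does, not the exact config-dependent values:

#include <stdio.h>

/* Illustrative constants; the real ones live in
 * arch/x86/include/asm/irq_vectors.h and vary with config. */
#define NR_VECTORS	256		/* hardware vectors per cpu */
#define NR_CPUS		4096

/* x86 sizes the irq_desc[] array at compile time, scaling with
 * NR_CPUS (roughly NR_VECTORS + 32 * NR_CPUS). */
#define NR_IRQS		(NR_VECTORS + (32 * NR_CPUS))

int main(void)
{
	/* With NR_CPUS=4096 that is 131328 descriptors, reserved
	 * whether or not the i/o hardware is actually present. */
	printf("NR_IRQS = %d\n", NR_IRQS);
	return 0;
}
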
There have been a couple of different proposals to disassociate i/o and
system vectors, though even guessing at the number of i/o devices is
tricky. Every one of the 512 nodes on a UV system *may* have a number of
i/o devices attached to it, though in practice this will be rare.
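
To make the sparse_irq idea concrete, here is a toy userspace model
(the names and the hash scheme are mine, not the actual patch):
descriptors are allocated on first use and looked up by number, so the
footprint follows the i/o actually present instead of a compile-time
NR_IRQS ceiling:

#include <stdio.h>
#include <stdlib.h>

struct irq_desc {
	unsigned int irq;
	struct irq_desc *next;		/* hash-chain link */
};

#define HASH_BUCKETS 256
static struct irq_desc *desc_hash[HASH_BUCKETS];

/* Find the descriptor for an irq, or NULL if never allocated. */
static struct irq_desc *irq_to_desc(unsigned int irq)
{
	struct irq_desc *d;

	for (d = desc_hash[irq % HASH_BUCKETS]; d; d = d->next)
		if (d->irq == irq)
			return d;
	return NULL;
}

/* Allocate a descriptor on first use; idempotent on later calls. */
static struct irq_desc *irq_to_desc_alloc(unsigned int irq)
{
	struct irq_desc *d = irq_to_desc(irq);

	if (d)
		return d;
	d = calloc(1, sizeof(*d));
	if (!d)
		return NULL;
	d->irq = irq;
	d->next = desc_hash[irq % HASH_BUCKETS];
	desc_hash[irq % HASH_BUCKETS] = d;
	return d;
}

int main(void)
{
	/* Only the irqs actually requested get a descriptor. */
	irq_to_desc_alloc(9);
	irq_to_desc_alloc(200000);	/* fine: no fixed ceiling */
	printf("irq 9 %s, irq 10 %s\n",
	       irq_to_desc(9) ? "present" : "absent",
	       irq_to_desc(10) ? "present" : "absent");
	return 0;
}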