Message-ID: <4D193533.6060103@kernel.org>
Date: Mon, 27 Dec 2010 16:54:11 -0800
From: Yinghai Lu <yinghai@...nel.org>
To: Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: [PATCH] x86, sparseirq: let nr_irqs equal to NR_IRQS
For an x86_64 system with 128 cpus and 5 ioapics, nr_irqs works out to:

	120 + 8 * 128 + 120 * 16 = 3064

But such a system could take 20 PCIe devices. When Intel 10GbE adapters are used
with SR-IOV and ixgbevf, every VF needs 3 irqs, and one device has 64 VFs,
so we would need 20 * 3 * 64 = 3840 irqs. Some 6-port Intel 10GbE cards
may need even more.
Just remove that function for x86 and let nr_irqs equal NR_IRQS, because
we already have the radix tree and bitmap for looking up the desc for an irq.

Note: long ago, the same version caused udev to hang on one of Ingo's setups...
Signed-off-by: Yinghai Lu <yinghai@...nel.org>
---
arch/x86/kernel/apic/io_apic.c | 22 ----------------------
1 file changed, 22 deletions(-)
Index: linux-2.6/arch/x86/kernel/apic/io_apic.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/apic/io_apic.c
+++ linux-2.6/arch/x86/kernel/apic/io_apic.c
@@ -3637,28 +3637,6 @@ int get_nr_irqs_gsi(void)
return nr_irqs_gsi;
}
-#ifdef CONFIG_SPARSE_IRQ
-int __init arch_probe_nr_irqs(void)
-{
- int nr;
-
- if (nr_irqs > (NR_VECTORS * nr_cpu_ids))
- nr_irqs = NR_VECTORS * nr_cpu_ids;
-
- nr = nr_irqs_gsi + 8 * nr_cpu_ids;
-#if defined(CONFIG_PCI_MSI) || defined(CONFIG_HT_IRQ)
- /*
- * for MSI and HT dyn irq
- */
- nr += nr_irqs_gsi * 16;
-#endif
- if (nr < nr_irqs)
- nr_irqs = nr;
-
- return NR_IRQS_LEGACY;
-}
-#endif
-
static int __io_apic_set_pci_routing(struct device *dev, int irq,
struct io_apic_irq_attr *irq_attr)
{
--