Message-ID: <4B561D37.9040007@kernel.org>
Date: Tue, 19 Jan 2010 12:59:35 -0800
From: Yinghai Lu <yinghai@...nel.org>
To: Suresh Siddha <suresh.b.siddha@...el.com>
CC: "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [patch] x86, irq: don't block IRQ0_VECTOR..IRQ15_VECTOR's on
all cpu's
On 01/19/2010 12:20 PM, Suresh Siddha wrote:
> Currently IRQ0..IRQ15 are assigned to IRQ0_VECTOR..IRQ15_VECTOR on
> all the cpus.
>
> If these IRQs are handled by the legacy PIC controller, then the kernel
> handles them only on cpu 0, so there is no need to block this vector
> space on all cpus.
>
> Similarly, if these IRQs are handled by the IO-APIC, then the irq affinity
> will determine on which cpus we need to allocate the vector resource for
> that particular IRQ. This can be done dynamically, and here too there is
> no need to block 16 vectors for IRQ0..IRQ15 on all cpus.
>
> Fix this by initially assigning IRQ0..IRQ15 to IRQ0_VECTOR..IRQ15_VECTOR
> only on cpu 0. If legacy controllers like the PIC handle these irqs, then
> this configuration stays fixed. If more modern controllers like the IO-APIC
> handle these IRQs, then we start with this configuration and, as IRQs
> migrate, the vectors (and cpus) associated with these IRQs change dynamically.
>
> This frees up the block of 16 vectors on the other cpus which don't handle
> IRQ0..IRQ15; those vectors can now be used for other IRQs that the
> particular cpu handles.
>
> Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
> ---
> arch/x86/include/asm/irq.h | 1 +
> arch/x86/kernel/apic/io_apic.c | 33 ++++++++++-----------------------
> arch/x86/kernel/irqinit.c | 35 +++++++++++++++++------------------
> arch/x86/kernel/vmiclock_32.c | 2 ++
> 4 files changed, 30 insertions(+), 41 deletions(-)
>
> Index: tip/arch/x86/kernel/apic/io_apic.c
> ===================================================================
> --- tip.orig/arch/x86/kernel/apic/io_apic.c
> +++ tip/arch/x86/kernel/apic/io_apic.c
> @@ -94,8 +94,6 @@ struct mpc_intsrc mp_irqs[MAX_IRQ_SOURCE
> /* # of MP IRQ source entries */
> int mp_irq_entries;
>
> -/* Number of legacy interrupts */
> -static int nr_legacy_irqs __read_mostly = NR_IRQS_LEGACY;
> /* GSI interrupts */
> static int nr_irqs_gsi = NR_IRQS_LEGACY;
>
> @@ -140,27 +138,10 @@ static struct irq_pin_list *get_one_free
>
> /* irq_cfg is indexed by the sum of all RTEs in all I/O APICs. */
> #ifdef CONFIG_SPARSE_IRQ
> -static struct irq_cfg irq_cfgx[] = {
> +static struct irq_cfg irq_cfgx[NR_LEGACY_IRQS];
> #else
> -static struct irq_cfg irq_cfgx[NR_IRQS] = {
> +static struct irq_cfg irq_cfgx[NR_IRQS];
> #endif
> - [0] = { .vector = IRQ0_VECTOR, },
> - [1] = { .vector = IRQ1_VECTOR, },
> - [2] = { .vector = IRQ2_VECTOR, },
> - [3] = { .vector = IRQ3_VECTOR, },
> - [4] = { .vector = IRQ4_VECTOR, },
> - [5] = { .vector = IRQ5_VECTOR, },
> - [6] = { .vector = IRQ6_VECTOR, },
> - [7] = { .vector = IRQ7_VECTOR, },
> - [8] = { .vector = IRQ8_VECTOR, },
> - [9] = { .vector = IRQ9_VECTOR, },
> - [10] = { .vector = IRQ10_VECTOR, },
> - [11] = { .vector = IRQ11_VECTOR, },
> - [12] = { .vector = IRQ12_VECTOR, },
> - [13] = { .vector = IRQ13_VECTOR, },
> - [14] = { .vector = IRQ14_VECTOR, },
> - [15] = { .vector = IRQ15_VECTOR, },
> -};
>
> void __init io_apic_disable_legacy(void)
> {
> @@ -185,8 +166,14 @@ int __init arch_early_irq_init(void)
> desc->chip_data = &cfg[i];
> zalloc_cpumask_var_node(&cfg[i].domain, GFP_NOWAIT, node);
> zalloc_cpumask_var_node(&cfg[i].old_domain, GFP_NOWAIT, node);
> - if (i < nr_legacy_irqs)
> - cpumask_setall(cfg[i].domain);
> + /*
> + * For legacy IRQ's, start with assigning irq0 to irq15 to
> + * IRQ0_VECTOR to IRQ15_VECTOR on cpu 0.
> + */
> + if (i < nr_legacy_irqs) {
> + cfg[i].vector = IRQ0_VECTOR + i;
> + cpumask_set_cpu(0, cfg[i].domain);
> + }
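The vector-space saving described in the changelog can be modelled in a few lines. This is a hypothetical userspace sketch, not the kernel's actual data structures: NR_CPUS, LEGACY_BASE and the helper names are made up for illustration, and the real kernel tracks allocation through per-irq irq_cfg and per-cpu vector tables.

```c
#include <stdbool.h>

#define NR_VECTORS  256
#define NR_LEGACY   16
#define NR_CPUS     4     /* hypothetical small system */
#define LEGACY_BASE 0x30  /* hypothetical base; IRQ0_VECTOR is arch-defined */

/* One in-use map per cpu; true = that vector is blocked on that cpu. */
static bool vector_used[NR_CPUS][NR_VECTORS];

/* Old scheme: block the 16 legacy vectors on every cpu. */
static void reserve_legacy_everywhere(void)
{
	int cpu, i;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		for (i = 0; i < NR_LEGACY; i++)
			vector_used[cpu][LEGACY_BASE + i] = true;
}

/* New scheme (this patch): block them only on cpu 0 initially. */
static void reserve_legacy_cpu0_only(void)
{
	int i;

	for (i = 0; i < NR_LEGACY; i++)
		vector_used[0][LEGACY_BASE + i] = true;
}

/* Count how many vectors remain allocatable on a given cpu. */
static int free_vectors(int cpu)
{
	int v, n = 0;

	for (v = 0; v < NR_VECTORS; v++)
		if (!vector_used[cpu][v])
			n++;
	return n;
}
```

Under the new scheme every cpu other than cpu 0 gains 16 allocatable vectors compared with the old scheme, which is exactly the resource the patch frees up.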
When the PIC is used, if the user sets /proc/irq/[0-15]/smp_affinity to a
cpu other than 0, we need to prevent that from happening.
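The guard being asked for could look roughly like this. It is a hypothetical userspace sketch with a made-up helper name, not the actual kernel code: a PIC-handled legacy irq is only ever delivered to cpu 0, so a proposed affinity mask must keep cpu 0 set, while an IO-APIC-handled irq may move freely.

```c
#include <stdbool.h>

/*
 * Sketch of the check: reject a new affinity mask for a PIC-handled
 * legacy irq unless bit 0 (cpu 0) is set, since the PIC can only
 * deliver to cpu 0. IO-APIC-handled irqs may target any cpu, with
 * vectors reallocated dynamically as they migrate.
 */
static bool affinity_change_allowed(bool handled_by_pic, unsigned long new_mask)
{
	if (handled_by_pic)
		return (new_mask & 1UL) != 0;  /* mask must include cpu 0 */
	return true;
}
```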
YH