Message-ID: <20061023173430.GX4354@rhun.haifa.ibm.com>
Date: Mon, 23 Oct 2006 19:34:30 +0200
From: Muli Ben-Yehuda <muli@...ibm.com>
To: Yinghai Lu <yinghai.lu@....com>
Cc: Andi Kleen <ak@....de>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...l.org>, Adrian Bunk <bunk@...sta.de>
Subject: Re: [PATCH] x86_64 irq: reuse vector for set_xxx_irq_affinity in phys flat mode
On Mon, Oct 23, 2006 at 12:02:44AM -0700, Yinghai Lu wrote:
> In phys flat mode, when set_xxx_irq_affinity is used to move an irq from
> one cpu to another, __assign_irq_vector bumps the last used vector and
> allocates a new one. If set_xxx_irq_affinity is called enough times, this
> uses up the vectors and ends up with the same vector being used on
> different cpus for different irqs. (That is not what we want; we only
> want to share a vector across cpus for different irqs when more than
> 0x240 irqs are needed.) To keep it simple, the vector should be reused
> when the irq moves from one cpu to another instead of allocating a new one.
>
> Signed-off-by: Yinghai Lu <yinghai.lu@....com>
> diff --git a/arch/x86_64/kernel/io_apic.c b/arch/x86_64/kernel/io_apic.c
> index b000017..3989fa5 100644
> --- a/arch/x86_64/kernel/io_apic.c
> +++ b/arch/x86_64/kernel/io_apic.c
> @@ -624,11 +624,32 @@ static int __assign_irq_vector(int irq,
> if (irq_vector[irq] > 0)
> old_vector = irq_vector[irq];
> if (old_vector > 0) {
> + cpumask_t domain, new_mask, old_mask;
> + int new_cpu, old_cpu;
> cpus_and(*result, irq_domain[irq], mask);
> if (!cpus_empty(*result))
> return old_vector;
> +
> + /* try to reuse vector for phys flat */
> + domain = vector_allocation_domain(cpu);
cpu is used uninitialized here. Please send an updated patch and I'll
give it a spin.
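For illustration only (an untested sketch, assuming the intent is to take
the target cpu from the new affinity mask, not any particular fix):

	/* sketch: initialize cpu from the requested mask before
	 * asking for its vector allocation domain */
	int cpu = first_cpu(mask);
	domain = vector_allocation_domain(cpu);

However cpu is meant to be chosen, it has to be assigned before that call.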
Cheers,
Muli