Message-ID: <52B336D4.8010809@redhat.com>
Date: Thu, 19 Dec 2013 13:11:32 -0500
From: Prarit Bhargava <prarit@...hat.com>
To: Tony Luck <tony.luck@...il.com>
CC: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, X86-ML <x86@...nel.org>,
Michel Lespinasse <walken@...gle.com>,
Andi Kleen <ak@...ux.intel.com>,
Seiji Aguchi <seiji.aguchi@....com>,
Yang Zhang <yang.z.zhang@...el.com>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
janet.morgan@...el.com, "Yu, Fenghua" <fenghua.yu@...el.com>
Subject: Re: [PATCH] x86: Add check for number of available vectors before CPU down [v2]

On 12/19/2013 01:05 PM, Tony Luck wrote:
> On Wed, Dec 18, 2013 at 11:50 AM, Tony Luck <tony.luck@...el.com> wrote:
>> Looks good to me.
>
> Though now I've been confused by an offline question about affinity.
Heh :) I'm pursuing it now. Rui has asked a pretty good question that I don't
know the answer to off the top of my head. I'm still looking at the code.
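For anyone following along, the v2 check is essentially a vector count -
something like the sketch below (simplified, not the patch verbatim; it just
illustrates the accounting Tony is poking at):

/*
 * Simplified sketch of the idea in the v2 patch, not the code
 * verbatim: count the vectors that have to move off the dying cpu
 * and make sure the remaining online cpus have enough free slots.
 * vector_irq is the existing x86 per-cpu vector -> irq table
 * (entries are -1 when unused).
 */
static int check_vectors(int dying_cpu)
{
	int cpu, vector, this_count = 0, count = 0;

	/* vectors in use on the cpu going down */
	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++)
		if (per_cpu(vector_irq, dying_cpu)[vector] >= 0)
			this_count++;

	/* free vectors everywhere else */
	for_each_online_cpu(cpu) {
		if (cpu == dying_cpu)
			continue;
		for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++)
			if (per_cpu(vector_irq, cpu)[vector] < 0)
				count++;
	}

	return count < this_count ? -ERANGE : 0;
}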
>
> Suppose we have some interrupt that has affinity to multiple cpus. E.g.
> (real example from one of my machines):
>
> # cat /proc/irq/94/smp_affinity_list
> 26,54
>
> Now if I want to take either cpu26 or cpu54 offline - I'm guessing that I
> don't really need to find a new home for vector 94 - because the other one
> of that pair already has that set up. But your check_vectors code doesn't
> look like it accounts for that - if we take cpu26 offline it would see that
> cpu54 doesn't have 94 free - but doesn't check that it is for the same
> interrupt.
>
> But I may be mixing "vectors" and "irqs" here.
Yep. The question really is this: is the irq mapped to a single vector, or to
multiple vectors (one on each cpu in its affinity mask)? (I think)
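If it is a vector on each cpu in the affinity mask, then the check could be
taught to skip irqs that already have a home elsewhere - something like this
sketch (irq_has_other_home() is a made-up name, not anything in the tree):

/*
 * Sketch only, not real kernel code: an irq on the dying cpu does
 * not need a new vector if some other online cpu already has the
 * same irq in its vector_irq table.
 */
static bool irq_has_other_home(int irq, int dying_cpu)
{
	int cpu, vector;

	for_each_online_cpu(cpu) {
		if (cpu == dying_cpu)
			continue;
		for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++)
			if (per_cpu(vector_irq, cpu)[vector] == irq)
				return true;
	}
	return false;
}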
P.
>
> -Tony