Message-ID: <52B34313.8050905@redhat.com>
Date: Thu, 19 Dec 2013 14:03:47 -0500
From: Prarit Bhargava <prarit@...hat.com>
To: rui wang <ruiv.wang@...il.com>
CC: linux-kernel@...r.kernel.org, x86@...nel.org,
"Chen, Gong" <gong.chen@...el.com>,
"Yu, Fenghua" <fenghua.yu@...el.com>
Subject: Re: [PATCH] x86: Add check for number of available vectors before
CPU down
On 12/19/2013 02:19 AM, rui wang wrote:
> On 12/19/13, Prarit Bhargava <prarit@...hat.com> wrote:
>>
>>
>> On 12/03/2013 09:48 PM, rui wang wrote:
>>> On 11/20/13, Prarit Bhargava <prarit@...hat.com> wrote:
>>> Have you considered the case where an IRQ is destined for more than one
>>> CPU? e.g.:
>>> e.g.:
>>>
>>> bash# cat /proc/irq/89/smp_affinity_list
>>> 30,62
>>> bash#
>>>
>>> In this case offlining CPU30 does not seem to require an empty vector
>>> slot. It seems that we only need to change the affinity mask of irq89.
>>> Your check_vectors() assumes that each irq on the offlining cpu
>>> requires a new vector slot.
>>>
>>
>> Rui,
>>
>> The smp_affinity_list only indicates the preferred destinations for the
>> IRQ, not the CPU the IRQ is *actually* routed to.  So the IRQ is on one
>> of CPU 30 or 62, but not both simultaneously.
>>
>
> It depends on how the IOAPIC (or MSI/MSI-X) is configured.  An IRQ can
> be simultaneously broadcast to all destination CPUs (Fixed Mode) or
> delivered only to the CPU running at the lowest priority (Lowest
> Priority Mode).  This is programmed in the Delivery Mode bits of the
> IOAPIC's I/O Redirection Table registers, or in the Message Data
> Register in the case of MSI/MSI-X.
>
Thanks for cluing me in, Rui :).  You're right.  I do need to do a

	if (irq_has_action(irq) && !irqd_is_per_cpu(data) &&
	    !cpumask_subset(affinity, cpu_online_mask))

instead of just

	if (!irqd_is_per_cpu(data))
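
In context it would look something like this (an untested sketch; the
helper name and the affinity_new handling are my own, so treat it as
pseudocode until I've actually run it):

	/*
	 * Count the IRQs on the outgoing CPU that will need a vector
	 * slot on some other CPU once this CPU goes down.
	 */
	static int count_irqs_needing_new_vector(unsigned int this_cpu)
	{
		unsigned int vector;
		int irq, this_count = 0;
		struct irq_desc *desc;
		struct irq_data *data;
		struct cpumask affinity_new;

		for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS;
		     vector++) {
			irq = __this_cpu_read(vector_irq[vector]);
			if (irq < 0)
				continue;

			desc = irq_to_desc(irq);
			data = irq_desc_get_irq_data(desc);

			/* Inactive and per-cpu IRQs never need migrating. */
			if (!irq_has_action(irq) || irqd_is_per_cpu(data))
				continue;

			/*
			 * Rui's case: if the affinity mask still contains
			 * another online CPU, the IRQ just follows its
			 * affinity and does not consume a new vector slot.
			 */
			cpumask_copy(&affinity_new, data->affinity);
			cpumask_clear_cpu(this_cpu, &affinity_new);
			if (cpumask_intersects(&affinity_new, cpu_online_mask))
				continue;

			this_count++;
		}

		return this_count;
	}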
I'm going to do some additional testing tonight across various systems and will
repost tomorrow if the testing is successful.
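
As an aside, for anyone who wants to poke at the delivery mode Rui
describes: it lives in bits 10:8 of the MSI Message Data Register (and
in the corresponding field of each I/O Redirection Table entry).  From
memory, the encoding looks roughly like the macros in
arch/x86/include/asm/msidef.h -- double-check against the SDM before
relying on it:

	#define MSI_DATA_DELIVERY_MODE_SHIFT	8
	/* 000b: Fixed -- delivered to every CPU in the destination set */
	#define  MSI_DATA_DELIVERY_FIXED  (0 << MSI_DATA_DELIVERY_MODE_SHIFT)
	/* 001b: Lowest Priority -- delivered to a single CPU, chosen by
	 * comparing processor priorities */
	#define  MSI_DATA_DELIVERY_LOWPRI (1 << MSI_DATA_DELIVERY_MODE_SHIFT)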
P.