Message-ID: <16fbd3eb-92e2-4e6f-b020-4f5a2feee4ad@nvidia.com>
Date: Mon, 5 Aug 2024 08:34:47 +0300
From: Shay Drori <shayd@...dia.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: <linux-kernel@...r.kernel.org>
Subject: Re: pci_msix_alloc_irq_at() affinity
On 26/07/2024 16:48, Thomas Gleixner wrote:
>
> On Thu, Jul 25 2024 at 08:34, Shay Drori wrote:
>> I did some testing with pci_msix_alloc_irq_at() and I noticed that the
>> affinity provided, via "struct irq_affinity_desc *af_desc", doesn't have
>> any effect.
>>
>> After some digging, I found out that irq_setup_affinity(), which is
>> called by request_irq(), is setting the affinity as all the CPUs online,
>> ignoring the affinity provided in pci_msix_alloc_irq_at().
>> Is this on purpose or a bug?
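For context, a minimal sketch of the kind of driver-side sequence involved;
the function and device names here are placeholders, not code from any real
driver:

#include <linux/interrupt.h>
#include <linux/msi_api.h>
#include <linux/pci.h>

static int my_drv_add_vector(struct pci_dev *pdev, unsigned int cpu,
			     irq_handler_t handler, void *data)
{
	struct irq_affinity_desc af_desc = { };
	struct msi_map map;

	if (!pci_msix_can_alloc_dyn(pdev))
		return -EOPNOTSUPP;

	/* Ask for the new vector to be affine to a single CPU */
	cpumask_set_cpu(cpu, &af_desc.mask);

	/* Allocate a new MSI-X vector at any free index with that affinity */
	map = pci_msix_alloc_irq_at(pdev, MSI_ANY_INDEX, &af_desc);
	if (map.index < 0)
		return map.index;

	/*
	 * request_irq() ends up in irq_setup_affinity(), which currently
	 * replaces the requested mask with the default affinity because
	 * IRQD_AFFINITY_SET was never set for this descriptor.
	 */
	return request_irq(map.virq, handler, 0, "my_drv", data);
}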
>
> It's an oversight. So far this has only been used with managed
> interrupts and the non-managed parts at the beginning or end of the
> interrupt group have been assigned the default affinity which makes this
> obviously a non-problem because the startup code uses the default
> affinity too.
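For reference, a small illustration of the managed case described above,
assuming the affinity descriptors come from irq_create_affinity_masks();
example_masks() is a made-up helper name:

#include <linux/interrupt.h>

static struct irq_affinity_desc *example_masks(unsigned int nvecs)
{
	/* One non-managed vector at each end of the interrupt group */
	struct irq_affinity affd = {
		.pre_vectors  = 1,
		.post_vectors = 1,
	};

	/*
	 * Entries [1 .. nvecs - 2] come back with is_managed == 1 and a
	 * spread mask; entries 0 and nvecs - 1 are non-managed and carry
	 * irq_default_affinity, so the startup code applying the default
	 * affinity again changes nothing for them.
	 */
	return irq_create_affinity_masks(nvecs, &affd);
}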
>
>> P.S. The below diff honors the affinity provided in
>> pci_msix_alloc_irq_at()
>>
>> --- a/kernel/irq/irqdesc.c
>> +++ b/kernel/irq/irqdesc.c
>> @@ -530,6 +530,7 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node,
>>  			flags = IRQD_AFFINITY_MANAGED |
>>  				IRQD_MANAGED_SHUTDOWN;
>>  		}
>> +		flags |= IRQD_AFFINITY_SET;
>>  		mask = &affinity->mask;
>>  		node = cpu_to_node(cpumask_first(mask));
>>  		affinity++;
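For reference, a simplified paraphrase (not a verbatim excerpt; the online-CPU
masking and NUMA handling are omitted) of the check in irq_setup_affinity() in
kernel/irq/manage.c that the added flag is aimed at:

/* Simplified paraphrase of irq_setup_affinity(), for illustration only */
int irq_setup_affinity(struct irq_desc *desc)
{
	const struct cpumask *set = irq_default_affinity;

	/*
	 * Only a managed affinity or a mask explicitly marked with
	 * IRQD_AFFINITY_SET is preserved; everything else is replaced by
	 * the default affinity at startup, which is what clobbers the mask
	 * passed to pci_msix_alloc_irq_at() today.
	 */
	if (irqd_affinity_is_managed(&desc->irq_data) ||
	    irqd_has_set(&desc->irq_data, IRQD_AFFINITY_SET)) {
		if (cpumask_intersects(desc->irq_common_data.affinity,
				       cpu_online_mask))
			set = desc->irq_common_data.affinity;
	}

	return irq_do_set_affinity(&desc->irq_data, set, false);
}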
>
> Looks about right, though the diff is whitespace damaged.
>
> Care to submit a proper patch?
Sorry for the late reply. Yes.
On top of which kernel branch should I create the patch?
>
> Thanks,
>
> tglx