Message-ID: <CAJ+vNU01JFRXSzF-0OhvyLds03mbbeBbw0by0dwuCotdxpDuog@mail.gmail.com>
Date: Tue, 29 Mar 2016 10:38:16 -0700
From: Tim Harvey <tharvey@...eworks.com>
To: Arnd Bergmann <arnd@...db.de>
Cc: Lucas Stach <l.stach@...gutronix.de>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
Richard Zhu <Richard.Zhu@...escale.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Krzysztof Hałasa <khalasa@...p.pl>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Petr Štetiar <ynezz@...e.cz>,
Fabio Estevam <festevam@...il.com>
Subject: Re: [PATCH] i.MX6 PCIe: Fix imx6_pcie_deassert_core_reset() polarity
On Tue, Mar 29, 2016 at 8:24 AM, Arnd Bergmann <arnd@...db.de> wrote:
> On Tuesday 29 March 2016 08:10:08 Tim Harvey wrote:
>> Arnd,
>>
>> Right, on the i.MX the MSI interrupt is GIC-120, which is also the
>> legacy INTD, and I do see that if I happen to put a radio in a slot
>> where due to swizzling its pin 1 becomes INTD (GIC-120), the
>> interrupt does fire and the device works. Any other slot using
>> GIC-123 (INTA), GIC-122 (INTB), or GIC-121 (INTC) never fires, so
>> it's very possible that something in the DesignWare core is masking
>> out the legacy IRQs.
>
> Interesting. I was actually expecting the opposite here, having the
> IRQs only work if they are not IntD.
>
>
>> I typically advise our users not to enable MSI because
>> architecturally you can spread 4 distinct legacy IRQs across CPUs
>> better than a single shared IRQ.
>
> That is a very good point; I never understood why we want to enable
> MSI support on any PCI host bridge that just forwards all MSIs
> to a single IRQ line. Originally MSI was meant as a performance
> feature, but there is nothing in this setup that makes things go
> faster, and several things that make it go slower.
I had a conversation with Lucas once about implementing the shared MSI
interrupt in such a way that its SMP affinity could be set to other
CPUs to gain a performance benefit in certain multi-device cases.
While this is technically possible, it would involve creating a softirq
glue layer between the demultiplexer and the individual device
handlers, and that would add the overhead of a softirq, plus
potentially waking up another CPU, to every IRQ, which would end up
penalizing even the simple single-device case.
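To make the idea concrete, here is a rough, untested sketch of what
such glue could look like using irq_work. None of this exists in the
driver today; the structure and function names are made up for
illustration:

/*
 * Hypothetical glue between the shared MSI demultiplexer and the
 * per-device handlers: instead of calling generic_handle_irq()
 * directly from the bridge's MSI handler, push the dispatch to a
 * chosen CPU via irq_work.
 */
#include <linux/kernel.h>
#include <linux/irq.h>
#include <linux/irq_work.h>
#include <linux/irqdesc.h>

struct msi_glue {
	struct irq_work work;
	unsigned int virq;	/* downstream device's Linux IRQ */
	int target_cpu;		/* CPU its handler should run on */
};

/* Runs in hard-IRQ context on glue->target_cpu (via IPI) */
static void msi_glue_run(struct irq_work *work)
{
	struct msi_glue *glue = container_of(work, struct msi_glue, work);

	generic_handle_irq(glue->virq);
}

/* Called from the host bridge's MSI handler for each pending vector */
static void msi_glue_dispatch(struct msi_glue *glue)
{
	/*
	 * The IPI here (plus possibly waking the target CPU out of
	 * idle) is exactly the extra per-interrupt cost described
	 * above.
	 */
	irq_work_queue_on(&glue->work, glue->target_cpu);
}

The glue would also need init_irq_work(&glue->work, msi_glue_run) at
setup time and some policy for picking target_cpu, which is where it
stops being an obvious win.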
Without any hard data it wasn't clear whether this was worth it, or
whether there was a clean way to provide it as a build-time or
run-time option.
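(For reference, the "spread the legacy IRQs across CPUs" advice above
amounts to nothing more than writing the per-IRQ affinity masks, e.g.
from a trivial userspace helper like the sketch below. The IRQ numbers
are made up; check /proc/interrupts on the board for the real
INTA..INTD lines.)

#include <stdio.h>

int main(void)
{
	/* hypothetical Linux IRQ numbers for legacy INTA..INTD */
	const int irqs[4] = { 155, 154, 153, 152 };
	/* one CPU bit per line: CPU0, CPU1, CPU2, CPU3 */
	const char *masks[4] = { "1", "2", "4", "8" };
	char path[64];
	int i;

	for (i = 0; i < 4; i++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/proc/irq/%d/smp_affinity", irqs[i]);
		f = fopen(path, "w");
		if (!f) {
			perror(path);
			continue;
		}
		fputs(masks[i], f);
		fclose(f);
	}
	return 0;
}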
Tim