Message-ID: <CABLWAfR+Nk85wurWnPArpmFPXVhxn0jZ+7H13WnKEPCMJVU0XA@mail.gmail.com>
Date: Wed, 11 Oct 2017 10:26:39 +0200
From: Sandor Bodo-Merle <sbodomerle@...il.com>
To: Ray Jui <ray.jui@...adcom.com>
Cc: Bodo-Merle Sandor <esndbod@...il.com>, linux-pci@...r.kernel.org,
Bjorn Helgaas <bhelgaas@...gle.com>,
Ray Jui <rjui@...adcom.com>,
Scott Branden <sbranden@...adcom.com>,
Jon Mason <jonmason@...adcom.com>,
bcm-kernel-feedback-list@...adcom.com,
Shawn Lin <shawn.lin@...k-chips.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] PCI: iproc: Allow allocation of multiple MSIs

Hi Ray,
We tested it on a custom board based on BCM56260. SMP affinity was not
tested, as our board runs on a single core.
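
For reference, here is a rough standalone sketch of the hwirq encoding used
by the patch quoted below (the helper names encode_msi_data()/decode_hwirq()
and the demo values are mine, not the driver's). With Multi-MSI the endpoint
owns the low log2(nvec) bits of the Message Data and ORs the vector index
into them, so the base hwirq has to live above the low 5 bits (MSI allows at
most 32 vectors per function):

#include <stdint.h>
#include <stdio.h>

#define VEC_SHIFT 5     /* low 5 bits belong to the device */
#define VEC_MASK  0x1f

/* Mirrors what iproc_msi_irq_compose_msi_msg() now writes to msg->data. */
static uint32_t encode_msi_data(uint32_t hwirq)
{
        return hwirq << VEC_SHIFT;
}

/* Mirrors what decode_msi_hwirq() now recovers from the event queue word. */
static uint32_t decode_hwirq(uint32_t data)
{
        return (data >> VEC_SHIFT) + (data & VEC_MASK);
}

int main(void)
{
        uint32_t base = 8;                      /* base hwirq of a 4-vector block */
        uint32_t data = encode_msi_data(base);  /* 0x100 */
        uint32_t raised = data | 3;             /* device raises vector 3 */

        /* prints "data=0x100 raised=0x103 hwirq=11", i.e. base + 3 */
        printf("data=0x%x raised=0x%x hwirq=%u\n",
               (unsigned)data, (unsigned)raised,
               (unsigned)decode_hwirq(raised));
        return 0;
}

The readl() part of the patch matters on the same decode path: the event
queue word is little-endian device memory, so dereferencing the pointer
directly would yield a byte-swapped value, and thus a wrong hwirq, on a
big-endian CPU.
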
br,
Sandor
ps - sorry for the duplicate, but by default Gmail sent out HTML-formatted
mail :(
On Tue, Oct 10, 2017 at 8:09 PM, Ray Jui <ray.jui@...adcom.com> wrote:
> Hi Bodo,
>
>
> On 10/7/2017 5:08 AM, Bodo-Merle Sandor wrote:
>>
>> From: Sandor Bodo-Merle <sbodomerle@...il.com>
>>
>> Add support for allocating multiple MSIs at the same time, so that the
>> MSI_FLAG_MULTI_PCI_MSI flag can be added to the msi_domain_info
>> structure.
>>
>> Avoid storing the hwirq in the low 5 bits of the message data, as it is
>> used by the device. Also fix an endianness problem by using readl().
>>
>> Signed-off-by: Sandor Bodo-Merle <sbodomerle@...il.com>
>> ---
>> drivers/pci/host/pcie-iproc-msi.c | 19 ++++++++++++-------
>> 1 file changed, 12 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/pci/host/pcie-iproc-msi.c b/drivers/pci/host/pcie-iproc-msi.c
>> index 2d0f535a2f69..990fc906d73d 100644
>> --- a/drivers/pci/host/pcie-iproc-msi.c
>> +++ b/drivers/pci/host/pcie-iproc-msi.c
>> @@ -179,7 +179,7 @@ static struct irq_chip iproc_msi_irq_chip = {
>>  static struct msi_domain_info iproc_msi_domain_info = {
>>          .flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
>> -                MSI_FLAG_PCI_MSIX,
>> +                MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
>>          .chip = &iproc_msi_irq_chip,
>>  };
>>
>> @@ -237,7 +237,7 @@ static void iproc_msi_irq_compose_msi_msg(struct irq_data *data,
>>          addr = msi->msi_addr + iproc_msi_addr_offset(msi, data->hwirq);
>>          msg->address_lo = lower_32_bits(addr);
>>          msg->address_hi = upper_32_bits(addr);
>> -        msg->data = data->hwirq;
>> +        msg->data = data->hwirq << 5;
>>  }
>>
>>  static struct irq_chip iproc_msi_bottom_irq_chip = {
>> @@ -251,7 +251,7 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
>>                                        void *args)
>>  {
>>          struct iproc_msi *msi = domain->host_data;
>> -        int hwirq;
>> +        int hwirq, i;
>>
>>          mutex_lock(&msi->bitmap_lock);
>>
>> @@ -267,10 +267,14 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
>>
>>          mutex_unlock(&msi->bitmap_lock);
>>
>> -        irq_domain_set_info(domain, virq, hwirq, &iproc_msi_bottom_irq_chip,
>> -                            domain->host_data, handle_simple_irq, NULL, NULL);
>> +        for (i = 0; i < nr_irqs; i++) {
>> +                irq_domain_set_info(domain, virq + i, hwirq + i,
>> +                                    &iproc_msi_bottom_irq_chip,
>> +                                    domain->host_data, handle_simple_irq,
>> +                                    NULL, NULL);
>> +        }
>>
>> -        return 0;
>> +        return hwirq;
>>  }
>>
>>  static void iproc_msi_irq_domain_free(struct irq_domain *domain,
>> @@ -302,7 +306,8 @@ static inline u32 decode_msi_hwirq(struct iproc_msi *msi, u32 eq, u32 head)
>>          offs = iproc_msi_eq_offset(msi, eq) + head * sizeof(u32);
>>          msg = (u32 *)(msi->eq_cpu + offs);
>> -        hwirq = *msg & IPROC_MSI_EQ_MASK;
>> +        hwirq = readl(msg);
>> +        hwirq = (hwirq >> 5) + (hwirq & 0x1f);
>>
>>          /*
>>           * Since we have multiple hwirq mapped to a single MSI vector,
>>
>
> Change looks okay to me in general. May I know which platform you tested
> this patch on, and whether SMP affinity configuration was tested?
>
> Thanks,
>
> Ray