Message-ID: <20210610233257.GA2794291@bjorn-Precision-5520>
Date: Thu, 10 Jun 2021 18:32:57 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Sandor Bodo-Merle <sbodomerle@...il.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Rob Herring <robh@...nel.org>,
Krzysztof Wilczyński <kw@...ux.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Ray Jui <rjui@...adcom.com>,
Scott Branden <sbranden@...adcom.com>,
bcm-kernel-feedback-list@...adcom.com, linux-pci@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Pali Rohár <pali@...nel.org>
Subject: Re: [PATCH 1/2] PCI: iproc: fix the base vector number allocation
for multi-MSI

On Sun, Jun 06, 2021 at 02:30:43PM +0200, Sandor Bodo-Merle wrote:
> Commit fc54bae28818 ("PCI: iproc: Allow allocation of multiple MSIs")
> introduced multi-MSI support with a broken allocation mechanism (it failed
> to reserve the proper number of bits from the inner domain). Natural
> alignment of the base vector number was also not guaranteed.
>
> Fixes: fc54bae28818 ("PCI: iproc: Allow allocation of multiple MSIs")
> Reported-by: Pali Rohár <pali@...nel.org>
> Signed-off-by: Sandor Bodo-Merle <sbodomerle@...il.com>

Looks good to me; thanks for splitting this. I think Lorenzo will
take care of this, and maybe he'll also adjust the subject to match
the other patch, e.g.,
- PCI: iproc: fix the base vector number allocation for multi-MSI
+ PCI: iproc: Fix multi-MSI base vector number allocation
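
For context (my illustration, not part of the patch): with multi-MSI,
a function granted N vectors (N a power of two) modifies only the low
log2(N) bits of the MSI data value, so the base vector number must be
a multiple of N. A small userspace sketch of the arithmetic, where
order_base_2() below is a stand-in for the kernel macro of the same
name:

  #include <stdio.h>

  /* Userspace stand-in for the kernel's order_base_2(): ceil(log2(n)) */
  static unsigned int order_base_2(unsigned int n)
  {
          unsigned int order = 0;

          while ((1U << order) < n)
                  order++;
          return order;
  }

  int main(void)
  {
          unsigned int nr_cpus = 4, nr_irqs = 8;   /* example values */
          unsigned int order = order_base_2(nr_cpus * nr_irqs);

          /* 32 vectors -> order 5; base hwirq must be a multiple of 32 */
          printf("need 2^%u = %u vectors, base aligned to %u\n",
                 order, 1U << order, 1U << order);
          return 0;
  }

So bitmap_find_next_zero_area() could hand back an arbitrary offset,
while bitmap_find_free_region() returns a region of 2^order bits at a
naturally aligned offset, which is what the fix below relies on.
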
> ---
> drivers/pci/controller/pcie-iproc-msi.c | 21 +++++++++++----------
> 1 file changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c
> index eede4e8f3f75..557d93dcb3bc 100644
> --- a/drivers/pci/controller/pcie-iproc-msi.c
> +++ b/drivers/pci/controller/pcie-iproc-msi.c
> @@ -252,18 +252,18 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
>
> mutex_lock(&msi->bitmap_lock);
>
> - /* Allocate 'nr_cpus' number of MSI vectors each time */
> - hwirq = bitmap_find_next_zero_area(msi->bitmap, msi->nr_msi_vecs, 0,
> - msi->nr_cpus, 0);
> - if (hwirq < msi->nr_msi_vecs) {
> - bitmap_set(msi->bitmap, hwirq, msi->nr_cpus);
> - } else {
> - mutex_unlock(&msi->bitmap_lock);
> - return -ENOSPC;
> - }
> + /*
> + * Allocate 'nr_irqs' multiplied by 'nr_cpus' number of MSI vectors
> + * each time
> + */
> + hwirq = bitmap_find_free_region(msi->bitmap, msi->nr_msi_vecs,
> + order_base_2(msi->nr_cpus * nr_irqs));
>
> mutex_unlock(&msi->bitmap_lock);
>
> + if (hwirq < 0)
> + return -ENOSPC;
> +
> for (i = 0; i < nr_irqs; i++) {
> irq_domain_set_info(domain, virq + i, hwirq + i,
> &iproc_msi_bottom_irq_chip,
> @@ -284,7 +284,8 @@ static void iproc_msi_irq_domain_free(struct irq_domain *domain,
> mutex_lock(&msi->bitmap_lock);
>
> hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq);
> - bitmap_clear(msi->bitmap, hwirq, msi->nr_cpus);
> + bitmap_release_region(msi->bitmap, hwirq,
> + order_base_2(msi->nr_cpus * nr_irqs));
>
> mutex_unlock(&msi->bitmap_lock);
>
> --
> 2.31.0
>
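
As a footnote, here is a minimal userspace model (my own toy sketch,
not the kernel's bitmap implementation) of the aligned alloc/free
pairing the patch moves to; find_free_region() and release_region()
are hypothetical stand-ins for bitmap_find_free_region() and
bitmap_release_region():

  #include <stdio.h>

  #define NR_BITS 64

  static unsigned long bitmap;          /* one bit per MSI vector */

  /* Base of a free 2^order block aligned to 2^order, or -1 if full */
  static int find_free_region(unsigned int order)
  {
          unsigned int step = 1U << order;
          unsigned long mask = (step == 64) ? ~0UL : (1UL << step) - 1;
          unsigned int base;

          for (base = 0; base + step <= NR_BITS; base += step) {
                  if (!(bitmap & (mask << base))) {
                          bitmap |= mask << base;       /* reserve it */
                          return base;
                  }
          }
          return -1;
  }

  static void release_region(unsigned int base, unsigned int order)
  {
          unsigned int step = 1U << order;
          unsigned long mask = (step == 64) ? ~0UL : (1UL << step) - 1;

          bitmap &= ~(mask << base);
  }

  int main(void)
  {
          /* e.g. nr_cpus = 4, nr_irqs = 2 -> order 3, blocks of 8 */
          int a = find_free_region(3);
          int b = find_free_region(3);

          printf("first block at %d, second at %d\n", a, b);
          release_region(a, 3);
          printf("after release, reuse at %d\n", find_free_region(3));
          return 0;
  }

The point of the second hunk is visible here: the free path has to
release a region of the same order the alloc path reserved, or the
bitmap leaks vectors.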