Date:   Tue, 17 Oct 2017 14:03:00 -0500
From:   Bjorn Helgaas <helgaas@...nel.org>
To:     Bodo-Merle Sandor <esndbod@...il.com>
Cc:     linux-pci@...r.kernel.org, Scott Branden <sbranden@...adcom.com>,
        Jon Mason <jonmason@...adcom.com>, Ray Jui <rjui@...adcom.com>,
        Shawn Lin <shawn.lin@...k-chips.com>,
        linux-kernel@...r.kernel.org,
        bcm-kernel-feedback-list@...adcom.com,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Sandor Bodo-Merle <sbodomerle@...il.com>,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH] PCI: iproc: Allow allocation of multiple MSIs

On Sat, Oct 07, 2017 at 02:08:44PM +0200, Bodo-Merle Sandor wrote:
> From: Sandor Bodo-Merle <sbodomerle@...il.com>
> 
> Add support for allocating multiple MSIs at the same time, so that the
> MSI_FLAG_MULTI_PCI_MSI flag can be added to the msi_domain_info
> structure.
> 
> Avoid storing the hwirq in the low 5 bits of the message data, as it is
> used by the device. Also fix an endianness problem by using readl().
> 
> Signed-off-by: Sandor Bodo-Merle <sbodomerle@...il.com>

Applied with Ray's reviewed-by to pci/host-iproc for v4.15, thanks!

BTW, I saw Ray's reviewed-by and associated discussion because I was
personally addressed, but it didn't appear on linux-pci, probably
because the emails were not plain text; see
http://vger.kernel.org/majordomo-info.html

> ---
>  drivers/pci/host/pcie-iproc-msi.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/pci/host/pcie-iproc-msi.c b/drivers/pci/host/pcie-iproc-msi.c
> index 2d0f535a2f69..990fc906d73d 100644
> --- a/drivers/pci/host/pcie-iproc-msi.c
> +++ b/drivers/pci/host/pcie-iproc-msi.c
> @@ -179,7 +179,7 @@ static struct irq_chip iproc_msi_irq_chip = {
>  
>  static struct msi_domain_info iproc_msi_domain_info = {
>  	.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
> -		MSI_FLAG_PCI_MSIX,
> +		MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
>  	.chip = &iproc_msi_irq_chip,
>  };
>  
> @@ -237,7 +237,7 @@ static void iproc_msi_irq_compose_msi_msg(struct irq_data *data,
>  	addr = msi->msi_addr + iproc_msi_addr_offset(msi, data->hwirq);
>  	msg->address_lo = lower_32_bits(addr);
>  	msg->address_hi = upper_32_bits(addr);
> -	msg->data = data->hwirq;
> +	msg->data = data->hwirq << 5;
>  }
>  
>  static struct irq_chip iproc_msi_bottom_irq_chip = {
> @@ -251,7 +251,7 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
>  				      void *args)
>  {
>  	struct iproc_msi *msi = domain->host_data;
> -	int hwirq;
> +	int hwirq, i;
>  
>  	mutex_lock(&msi->bitmap_lock);
>  
> @@ -267,10 +267,14 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
>  
>  	mutex_unlock(&msi->bitmap_lock);
>  
> -	irq_domain_set_info(domain, virq, hwirq, &iproc_msi_bottom_irq_chip,
> -			    domain->host_data, handle_simple_irq, NULL, NULL);
> +	for (i = 0; i < nr_irqs; i++) {
> +		irq_domain_set_info(domain, virq + i, hwirq + i,
> +				    &iproc_msi_bottom_irq_chip,
> +				    domain->host_data, handle_simple_irq,
> +				    NULL, NULL);
> +	}
>  
> -	return 0;
> +	return hwirq;
>  }
>  
>  static void iproc_msi_irq_domain_free(struct irq_domain *domain,
> @@ -302,7 +306,8 @@ static inline u32 decode_msi_hwirq(struct iproc_msi *msi, u32 eq, u32 head)
>  
>  	offs = iproc_msi_eq_offset(msi, eq) + head * sizeof(u32);
>  	msg = (u32 *)(msi->eq_cpu + offs);
> -	hwirq = *msg & IPROC_MSI_EQ_MASK;
> +	hwirq = readl(msg);
> +	hwirq = (hwirq >> 5) + (hwirq & 0x1f);
>  
>  	/*
>  	 * Since we have multiple hwirq mapped to a single MSI vector,
> -- 
> 2.15.0.rc0
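For reference, a standalone sketch of the message-data layout the commit
message describes (encode_msg_data() and decode_hwirq() are illustrative
stand-ins that mirror the two hunks above, not part of the driver). With
multiple MSI vectors enabled, a device overwrites the low log2(N) bits of
the Message Data with its vector index, up to 5 bits for the 32 vectors
that MSI_FLAG_MULTI_PCI_MSI permits, so the hwirq base must sit above
bit 5 and the decoder recombines the two fields:

#include <stdint.h>
#include <assert.h>

/* What iproc_msi_irq_compose_msi_msg() now programs into msg->data:
 * the hwirq base shifted clear of the 5 bits the device may rewrite. */
static uint32_t encode_msg_data(uint32_t hwirq_base)
{
        return hwirq_base << 5;
}

/* What decode_msi_hwirq() recovers from an event queue entry:
 * the base from bits 31:5 plus the device-written index in bits 4:0. */
static uint32_t decode_hwirq(uint32_t msg_data)
{
        return (msg_data >> 5) + (msg_data & 0x1f);
}

int main(void)
{
        uint32_t base = 8;                             /* hypothetical bitmap slot */
        uint32_t on_wire = encode_msg_data(base) | 3;  /* device signals vector 3 */

        assert(decode_hwirq(on_wire) == base + 3);     /* decodes to hwirq 11 */
        return 0;
}

The switch to readl() guards the same word: readl() performs a
little-endian load, so the decode also works on a big-endian CPU, where
a plain dereference of the DMA'd event queue entry would read the bytes
in the wrong order and corrupt both fields.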

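On the consumer side, the new flag is what lets an endpoint driver
behind this bridge request more than one MSI vector in the first place.
A hypothetical driver sketch (demo_probe() and demo_handler() are
illustrative names; the PCI core calls are real), with
pci_alloc_irq_vectors() negotiating down toward min_vecs when fewer
vectors are available:

#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t demo_handler(int irq, void *dev_id)
{
        /* One handler per vector; dev_id carries the device context. */
        return IRQ_HANDLED;
}

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        int nvecs, i, err;

        /* Ask for 1..4 MSI vectors. Without MSI_FLAG_MULTI_PCI_MSI in the
         * host bridge's msi_domain_info, anything above 1 would fail. */
        nvecs = pci_alloc_irq_vectors(pdev, 1, 4, PCI_IRQ_MSI);
        if (nvecs < 0)
                return nvecs;

        for (i = 0; i < nvecs; i++) {
                err = request_irq(pci_irq_vector(pdev, i), demo_handler, 0,
                                  "demo", pdev);
                if (err)
                        goto err_free;
        }
        return 0;

err_free:
        while (--i >= 0)
                free_irq(pci_irq_vector(pdev, i), pdev);
        pci_free_irq_vectors(pdev);
        return err;
}

This matches the contract the allocation hunk implements: multi-MSI
vectors must be contiguous in hwirq space, which is why the loop maps
virq + i to hwirq + i.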