Message-ID: <868tpr8u3r.fsf@arm.com>
Date:   Tue, 31 Jan 2017 09:19:20 +0000
From:   Marc Zyngier <marc.zyngier@....com>
To:     Bharat Kumar Gogada <bharat.kumar.gogada@...inx.com>
Cc:     <bhelgaas@...gle.com>, <paul.gortmaker@...driver.com>,
        <robh@...nel.org>, <colin.king@...onical.com>,
        <linux-pci@...r.kernel.org>, <michal.simek@...inx.com>,
        <linux-arm-kernel@...ts.infradead.org>,
        <linux-kernel@...r.kernel.org>, <rgummal@...inx.com>,
        <arnd@...db.de>, "Bharat Kumar Gogada" <bharatku@...inx.com>
Subject: Re: [PATCH v3] PCI: Xilinx NWL: Modifying irq chip for legacy interrupts

On Tue, Jan 31 2017 at 08:59:12 AM, Bharat Kumar Gogada <bharat.kumar.gogada@...inx.com> wrote:
> - Add a mutex lock to protect the legacy mask register.
> - A few wifi end points that support only legacy interrupts perform
> a hardware reset after disabling interrupts with disable_irq() and
> re-enabling them with enable_irq(); they enable hardware interrupts
> first and the virtual irq line only later.
> - The legacy irq line goes low only after DEASSERT_INTx is
> received. Since the legacy irq line is high immediately after the
> hardware interrupts are enabled, while the EP's virq is still
> disabled, the EP handler never runs and no DEASSERT_INTx is sent.
> With the dummy irq chip the interrupts are never masked, and the
> system hangs with a CPU stall.
> - Add proper irq chip functions instead of the dummy irq chip for
> legacy interrupts.
> - Legacy interrupts are level sensitive, so handle_level_irq is
> more appropriate: it masks the interrupt until the end point has
> handled it and unmasks it once the end point handler has run.
> - Although legacy interrupts are level triggered, the virtual irq
> line of the end point shows up as edge in /proc/interrupts.
> - Set the irq flags of the EP's virtual irq line to level triggered
> at mapping time.
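
(For context, since the hunk below only quotes the mask path: the
handle_level_irq/IRQ_LEVEL behaviour described above would normally
live in the irq domain's .map callback, roughly as sketched here. This
is only my reconstruction of what the unquoted part of the patch does;
the chip name nwl_leg_irq_chip is a guess, not taken from the patch.)

static int nwl_legacy_map(struct irq_domain *domain, unsigned int irq,
			  irq_hw_number_t hwirq)
{
	/* Level-sensitive INTx: keep it masked until the EP handler has run */
	irq_set_chip_and_handler(irq, &nwl_leg_irq_chip, handle_level_irq);
	irq_set_chip_data(irq, domain->host_data);
	/* Make /proc/interrupts report the line as level triggered */
	irq_set_status_flags(irq, IRQ_LEVEL);

	return 0;
}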
>
> Signed-off-by: Bharat Kumar Gogada <bharatku@...inx.com>
> ---
>  drivers/pci/host/pcie-xilinx-nwl.c |   43 +++++++++++++++++++++++++++++++++++-
>  1 files changed, 42 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/pci/host/pcie-xilinx-nwl.c b/drivers/pci/host/pcie-xilinx-nwl.c
> index 43eaa4a..76dd094 100644
> --- a/drivers/pci/host/pcie-xilinx-nwl.c
> +++ b/drivers/pci/host/pcie-xilinx-nwl.c
> @@ -184,6 +184,7 @@ struct nwl_pcie {
>  	u8 root_busno;
>  	struct nwl_msi msi;
>  	struct irq_domain *legacy_irq_domain;
> +	struct mutex leg_mask_lock;
>  };
>  
>  static inline u32 nwl_bridge_readl(struct nwl_pcie *pcie, u32 off)
> @@ -395,11 +396,50 @@ static void nwl_pcie_msi_handler_low(struct irq_desc *desc)
>  	chained_irq_exit(chip, desc);
>  }
>  
> +static void nwl_mask_leg_irq(struct irq_data *data)
> +{
> +	struct irq_desc *desc = irq_to_desc(data->irq);
> +	struct nwl_pcie *pcie;
> +	u32 mask;
> +	u32 val;
> +
> +	pcie = irq_desc_get_chip_data(desc);
> +	mask = 1 << (data->hwirq - 1);
> +	mutex_lock(&pcie->leg_mask_lock);
> +	val = nwl_bridge_readl(pcie, MSGF_LEG_MASK);
> +	nwl_bridge_writel(pcie, (val & (~mask)), MSGF_LEG_MASK);
> +	mutex_unlock(&pcie->leg_mask_lock);

Have you looked at which context this is called in? In a number of
cases, the mask/unmask methods are called whilst you're in an interrupt
context. If you sleep there (which is what happens with a contended
mutex), you die horribly.

Given these constraints, you should use raw_spin_lock_irqsave and co,
since this can be called from both interrupt and non-interrupt contexts.
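
Something along these lines (completely untested sketch, assuming
leg_mask_lock becomes a raw_spinlock_t initialised in probe; using
irq_data_get_irq_chip_data() also saves you the irq_to_desc() detour):

static void nwl_mask_leg_irq(struct irq_data *data)
{
	struct nwl_pcie *pcie = irq_data_get_irq_chip_data(data);
	unsigned long flags;
	u32 mask;
	u32 val;

	mask = 1 << (data->hwirq - 1);
	/* May run in hard interrupt context, so no sleeping locks here */
	raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags);
	val = nwl_bridge_readl(pcie, MSGF_LEG_MASK);
	nwl_bridge_writel(pcie, (val & (~mask)), MSGF_LEG_MASK);
	raw_spin_unlock_irqrestore(&pcie->leg_mask_lock, flags);
}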

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny.
