Message-ID: <CAAhSdy3aOmUSY7x=BVVKCb5JD39H+_zOhBCuwgUxgtJLKqTrDw@mail.gmail.com>
Date: Fri, 23 Oct 2020 17:03:26 +0530
From: Anup Patel <anup@...infault.org>
To: Guo Ren <guoren@...nel.org>
Cc: Palmer Dabbelt <palmerdabbelt@...gle.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Greentime Hu <greentime.hu@...ive.com>,
Zong Li <zong.li@...ive.com>,
Atish Patra <atish.patra@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Jason Cooper <jason@...edaemon.net>,
Marc Zyngier <maz@...nel.org>, wesley@...ive.com,
Yash Shah <yash.shah@...ive.com>,
Christoph Hellwig <hch@....de>,
linux-riscv <linux-riscv@...ts.infradead.org>,
"linux-kernel@...r.kernel.org List" <linux-kernel@...r.kernel.org>,
Guo Ren <guoren@...ux.alibaba.com>
Subject: Re: [PATCH 3/3] irqchip/irq-sifive-plic: Fixup set_affinity enable
irq unexpected
On Fri, Oct 23, 2020 at 3:48 PM <guoren@...nel.org> wrote:
>
> From: Guo Ren <guoren@...ux.alibaba.com>
>
> For the PLIC, we only have per-hart enable registers to control an
> IRQ's routing, and irq_set_affinity() calls plic_irq_toggle() to
> enable that routing. So we must not enable an irq in
> plic_irqdomain_map(), before request_irq(); otherwise an
> uninitialized device can raise an unexpected interrupt.
>
> The solution is to check whether the irq has already been enabled,
> just as irq-gic-v3 does in gic_set_affinity().
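
For context, the irq-gic-v3 pattern being referenced looks roughly like
this (a condensed paraphrase of gic_set_affinity() in
drivers/irqchip/irq-gic-v3.c, not the verbatim upstream code):

    static int gic_set_affinity(struct irq_data *d,
                                const struct cpumask *mask_val, bool force)
    {
            unsigned int cpu;
            int enabled;

            if (force)
                    cpu = cpumask_first(mask_val);
            else
                    cpu = cpumask_any_and(mask_val, cpu_online_mask);
            if (cpu >= nr_cpu_ids)
                    return -EINVAL;

            /* If the interrupt was enabled, disable it first ... */
            enabled = gic_peek_irq(d, GICD_ISENABLER);
            if (enabled)
                    gic_mask_irq(d);

            /* ... reprogram the distributor routing for the new CPU ... */
            gic_write_irouter(gic_mpidr_to_affinity(cpu_logical_map(cpu)),
                              gic_dist_base(d) + GICD_IROUTER + d->hwirq * 8);

            /* ... and only re-enable it if it was enabled to begin with. */
            if (enabled)
                    gic_unmask_irq(d);

            irq_data_update_effective_affinity(d, cpumask_of(cpu));
            return IRQ_SET_MASK_OK_DONE;
    }

The key point is that the enabled state is sampled before the move and
restored afterwards, so set_affinity never enables an interrupt that was
not already enabled.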
>
> Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
> ---
> drivers/irqchip/irq-sifive-plic.c | 45 ++++++++++++++++++++++++++++++++++++---
> 1 file changed, 42 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> index 0003322..1a63859 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -130,6 +130,36 @@ static void plic_irq_mask(struct irq_data *d)
> }
>
> #ifdef CONFIG_SMP
> +static inline bool plic_toggle_is_enabled(struct plic_handler *handler,
> + int hwirq)
> +{
> + u32 __iomem *reg = handler->enable_base + (hwirq / 32) * sizeof(u32);
> + u32 hwirq_mask = 1 << (hwirq % 32);
> +
> + if (readl(reg) & hwirq_mask)
> + return true;
> + else
> + return false;
> +}
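
(Nit: since this helper returns bool, the if/else could be collapsed to

            /* readl() result is masked and normalized to 0 or 1 */
            return !!(readl(reg) & hwirq_mask);

with the same behavior.)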
> +
> +static inline bool plic_irq_is_enabled(const struct cpumask *mask,
> + struct irq_data *d)
> +{
> + int cpu;
> +
> + for_each_cpu(cpu, mask) {
> + struct plic_handler *handler = per_cpu_ptr(&plic_handlers, cpu);
> +
> + if (!handler->present)
> + continue;
> +
> + if (plic_toggle_is_enabled(handler, d->hwirq))
> + return true;
> + }
> +
> + return false;
> +}
> +
> static int plic_set_affinity(struct irq_data *d,
> const struct cpumask *mask_val, bool force)
> {
> @@ -141,8 +171,10 @@ static int plic_set_affinity(struct irq_data *d,
>
> irq_data_update_effective_affinity(d, &amask);
>
> - plic_irq_toggle(&priv->lmask, d, 0);
> - plic_irq_toggle(&amask, d, 1);
> + if (plic_irq_is_enabled(&priv->lmask, d)) {
> + plic_irq_toggle(&priv->lmask, d, 0);
> + plic_irq_toggle(&amask, d, 1);
> + }
>
> return IRQ_SET_MASK_OK_DONE;
> }
> @@ -168,12 +200,19 @@ static struct irq_chip plic_chip = {
> static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
> irq_hw_number_t hwirq)
> {
> + unsigned int cpu;
> struct plic_priv *priv = d->host_data;
>
> irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
> handle_fasteoi_irq, NULL, NULL);
> irq_set_noprobe(irq);
> - irq_set_affinity(irq, &priv->lmask);
> +
> + cpu = cpumask_any_and(&priv->lmask, cpu_online_mask);
> + if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
> + return -EINVAL;
> +
> + irq_set_affinity(irq, cpumask_of(cpu));
> +
> return 0;
> }
>
> --
> 2.7.4
>
Greentime (SiFive) has already proposed a patch to fix this; refer to
https://lkml.org/lkml/2020/10/20/188

Only the plic_irqdomain_map() change, which sets the correct default
affinity, looks good to me.
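
That is, routing each interrupt to a single online hart at map time
instead of enabling it for all harts. In sketch form (names as in the
hunk above; cpumask_any_and() returns nr_cpu_ids when the two masks do
not intersect, hence the WARN_ON_ONCE() guard):

    /* Pick one online hart from this PLIC context's lmask ... */
    cpu = cpumask_any_and(&priv->lmask, cpu_online_mask);
    if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
            return -EINVAL;

    /* ... and make it the default affinity for this interrupt. */
    irq_set_affinity(irq, cpumask_of(cpu));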
Regards,
Anup