Message-ID: <CAAhSdy0T1G7=XVSmtYxONtfk+5-XYnv3qWyFL2Nnp-MS3aQroA@mail.gmail.com>
Date: Wed, 30 Nov 2022 22:44:09 +0530
From: Anup Patel <anup@...infault.org>
To: Marc Zyngier <maz@...nel.org>
Cc: Anup Patel <apatel@...tanamicro.com>,
Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Atish Patra <atishp@...shpatra.org>,
Alistair Francis <Alistair.Francis@....com>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v13 4/7] RISC-V: Treat IPIs as normal Linux IRQs
On Wed, Nov 30, 2022 at 9:48 PM Marc Zyngier <maz@...nel.org> wrote:
>
> On Tue, 29 Nov 2022 14:24:46 +0000,
> Anup Patel <apatel@...tanamicro.com> wrote:
> >
> > Currently, the RISC-V kernel provides arch specific hooks (i.e.
> > struct riscv_ipi_ops) to register IPI handling methods. The stats
> > gathering of IPIs is also arch specific in the RISC-V kernel.
> >
> > Other architectures (such as ARM, ARM64, and MIPS) have moved away
> > from custom arch specific IPI handling methods. Currently, these
> > architectures have Linux irqchip drivers providing a range of Linux
> > IRQ numbers to be used as IPIs and IPI triggering is done using
> > generic IPI APIs. This approach allows architectures to treat IPIs
> > as normal Linux IRQs and IPI stats gathering is done by the generic
> > Linux IRQ subsystem.
> >
> > We extend the RISC-V IPI handling as-per above approach so that arch
> > specific IPI handling methods (struct riscv_ipi_ops) can be removed
> > and the IPI handling is done through the Linux IRQ subsystem.
> >
> > Signed-off-by: Anup Patel <apatel@...tanamicro.com>
> > ---
> >  arch/riscv/Kconfig                |   2 +
> >  arch/riscv/include/asm/sbi.h      |  10 +-
> >  arch/riscv/include/asm/smp.h      |  35 ++++---
> >  arch/riscv/kernel/Makefile        |   1 +
> >  arch/riscv/kernel/cpu-hotplug.c   |   3 +-
> >  arch/riscv/kernel/irq.c           |   3 +-
> >  arch/riscv/kernel/sbi-ipi.c       |  81 ++++++++++++++++
> >  arch/riscv/kernel/sbi.c           | 106 +++-----------------
> >  arch/riscv/kernel/smp.c           | 155 +++++++++++++++---------------
> >  arch/riscv/kernel/smpboot.c       |   5 +-
> >  drivers/clocksource/timer-clint.c |  65 ++++++++++---
> >  drivers/irqchip/Kconfig           |   1 +
> >  drivers/irqchip/irq-riscv-intc.c  |  55 +++++------
> >  13 files changed, 287 insertions(+), 235 deletions(-)
> > create mode 100644 arch/riscv/kernel/sbi-ipi.c
> >
>
> [...]
>
> > diff --git a/arch/riscv/kernel/sbi-ipi.c b/arch/riscv/kernel/sbi-ipi.c
> > new file mode 100644
> > index 000000000000..6466706b03a7
> > --- /dev/null
> > +++ b/arch/riscv/kernel/sbi-ipi.c
> > @@ -0,0 +1,81 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Multiplex several IPIs over a single HW IPI.
> > + *
> > + * Copyright (c) 2022 Ventana Micro Systems Inc.
> > + */
> > +
> > +#define pr_fmt(fmt) "riscv: " fmt
> > +#include <linux/cpu.h>
> > +#include <linux/init.h>
> > +#include <linux/irq.h>
> > +#include <linux/irqdomain.h>
> > +#include <linux/percpu.h>
> > +#include <asm/sbi.h>
> > +
> > +static int sbi_ipi_virq;
> > +static DEFINE_PER_CPU_READ_MOSTLY(int, sbi_ipi_dummy_dev);
> > +
> > +static irqreturn_t sbi_ipi_handle(int irq, void *dev_id)
> > +{
> > +	csr_clear(CSR_IP, IE_SIE);
> > +	ipi_mux_process();
> > +	return IRQ_HANDLED;
>
> Urgh... I really wish I hadn't seen this. This requires a chained
> handler. You had it before, and yet you dropped it. Why?
>
> Either you call ipi_mux_process() from your root interrupt controller,
> or you implement a chained handler. But not this.
>
> Same thing about the clint stuff.
We had a chained handler all along, but a problem (which was pointed
out to us) with using a chained handler is that the parent RISC-V INTC
irqchip driver does not have irq_eoi(), so chained_irq_enter() and
chained_irq_exit() fall back to the interrupt mask/unmask dance on the
parent IRQ, which seems unnecessary. A rough sketch of that
chained-handler variant is below for reference.
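
For illustration only (the names here are illustrative, not the exact
code from the earlier revision), the chained-handler variant looks
roughly like this:

#include <linux/irqchip/chained_irq.h>

static void sbi_ipi_irq_handler(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);

	/*
	 * The parent RISC-V INTC chip has no irq_eoi(), so
	 * chained_irq_enter() falls back to irq_mask()/irq_ack() and
	 * chained_irq_exit() to irq_unmask() on the parent IRQ, i.e.
	 * the mask/unmask dance mentioned above.
	 */
	chained_irq_enter(chip, desc);

	csr_clear(CSR_IP, IE_SIE);
	ipi_mux_process();

	chained_irq_exit(chip, desc);
}

and then at init time, instead of request_percpu_irq():

	irq_set_chained_handler(sbi_ipi_virq, sbi_ipi_irq_handler);
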
Is there a better way to avoid the interrupt mask/unmask dance?
Regards,
Anup
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.