Message-ID: <023bd1b9eaad001fca53f714920280aa@kernel.org>
Date: Mon, 21 Feb 2022 09:07:31 +0000
From: Marc Zyngier <maz@...nel.org>
To: Anup Patel <anup@...infault.org>
Cc: Anup Patel <apatel@...tanamicro.com>,
Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Atish Patra <atishp@...shpatra.org>,
Alistair Francis <Alistair.Francis@....com>,
linux-riscv <linux-riscv@...ts.infradead.org>,
"linux-kernel@...r.kernel.org List" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/6] irqchip/riscv-intc: Create domain using named
fwnode
On 2022-02-19 13:03, Anup Patel wrote:
> On Sat, Feb 19, 2022 at 3:02 PM Marc Zyngier <maz@...nel.org> wrote:
>>
>> On 2022-02-19 03:38, Anup Patel wrote:
>> > On Thu, Feb 17, 2022 at 8:42 PM Marc Zyngier <maz@...nel.org> wrote:
>> >>
>> >> On 2022-01-28 05:25, Anup Patel wrote:
>> >> > We should create INTC domain using a synthetic fwnode which will allow
>> >> > drivers (such as RISC-V SBI IPI driver, RISC-V timer driver, RISC-V
>> >> > PMU driver, etc) not having dedicated DT/ACPI node to directly create
>> >> > interrupt mapping for standard local interrupt numbers defined by the
>> >> > RISC-V privileged specification.
>> >> >
>> >> > Signed-off-by: Anup Patel <apatel@...tanamicro.com>
>> >> > ---
>> >> > arch/riscv/include/asm/irq.h | 2 ++
>> >> > arch/riscv/kernel/irq.c | 13 +++++++++++++
>> >> > drivers/clocksource/timer-clint.c | 13 +++++++------
>> >> > drivers/clocksource/timer-riscv.c | 11 ++---------
>> >> > drivers/irqchip/irq-riscv-intc.c | 12 ++++++++++--
>> >> > drivers/irqchip/irq-sifive-plic.c | 19 +++++++++++--------
>> >> > 6 files changed, 45 insertions(+), 25 deletions(-)
>> >> >
>> >> > diff --git a/arch/riscv/include/asm/irq.h
>> >> > b/arch/riscv/include/asm/irq.h
>> >> > index e4c435509983..f85ebaf07505 100644
>> >> > --- a/arch/riscv/include/asm/irq.h
>> >> > +++ b/arch/riscv/include/asm/irq.h
>> >> > @@ -12,6 +12,8 @@
>> >> >
>> >> > #include <asm-generic/irq.h>
>> >> >
>> >> > +extern struct fwnode_handle *riscv_intc_fwnode(void);
>> >> > +
>> >> > extern void __init init_IRQ(void);
>> >> >
>> >> > #endif /* _ASM_RISCV_IRQ_H */
>> >> > diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
>> >> > index 7207fa08d78f..f2fed78ab659 100644
>> >> > --- a/arch/riscv/kernel/irq.c
>> >> > +++ b/arch/riscv/kernel/irq.c
>> >> > @@ -7,9 +7,22 @@
>> >> >
>> >> > #include <linux/interrupt.h>
>> >> > #include <linux/irqchip.h>
>> >> > +#include <linux/irqdomain.h>
>> >> > +#include <linux/module.h>
>> >> > #include <linux/seq_file.h>
>> >> > #include <asm/smp.h>
>> >> >
>> >> > +static struct fwnode_handle *intc_fwnode;
>> >> > +
>> >> > +struct fwnode_handle *riscv_intc_fwnode(void)
>> >> > +{
>> >> > + if (!intc_fwnode)
>> >> > + intc_fwnode = irq_domain_alloc_named_fwnode("RISCV-INTC");
>> >> > +
>> >> > + return intc_fwnode;
>> >> > +}
>> >> > +EXPORT_SYMBOL_GPL(riscv_intc_fwnode);
>> >>
>> >> Why is this created outside of the root interrupt controller driver?
>> >> Furthermore, why do you need to create a new fwnode in the first place?
>> >> As far as I can tell, the INTC does have a node, and what you don't
>> >> have is the firmware linkage between the PMU (and others) and the INTC.
>> >
>> > Fair enough, I will update this patch to not create a synthetic fwnode.
>> >
>> > The issue is not with the INTC driver. We have other drivers and places
>> > (such as the SBI IPI driver, SBI PMU driver, and KVM RISC-V AIA support)
>> > where we don't have a way to locate the INTC fwnode.
>>
>> And that's exactly what I am talking about: The INTC is OK (sort of),
>> but the firmware is too crap for words, and isn't even able to expose
>> where the various endpoints route their interrupts to.
>
> The firmware is always present at a higher privilege level, hence there
> is no DT node to discover the presence of the firmware. The local
> interrupts used by the firmware for IPI, Timer and PMU are defined by
> the RISC-V ISA specification.
>
>>
>> Yes, this is probably fine today because you can describe the topology
>> of RISC-V systems on the surface of a postage stamp. Once you get to
>> the complexity of a server-grade SoC (or worse, a mobile phone style
>> SoC), this *implicit topology* stuff doesn't fly, because there is no
>> guarantee that all endpoints will always point to the same controller.
>
> The local interrupts (per-CPU) are always managed by the INTC. The
> interrupt controllers that manage device interrupts (such as the PLIC)
> can vary from platform to platform and have the INTC as their parent
> domain.
I don't know how to make it clearer: this isn't about the situation
*today*. It is about what you will have two or five years from now.
Relying on a default topology is stupidly bad, and you will end up
in a terrible corner eventually, because you can't have *two*
defaults.
>
> We already have high-end interrupt controllers (such as AIA) under
> development which are scalable to server-grade SoCs, mobile SoCs,
> and various other types of SoCs.
>
> We are able to describe the topology of different types of interrupt
> controllers (PLIC as well as AIA) in DT.
>
> The only issue is for drivers which do not have a dedicated DT node
> (such as SBI IPI, SBI Timer, SBI PMU, or KVM RISC-V), but the
> upside is that the local interrupt numbers used by these drivers are
> clearly defined by the RISC-V ISA specification:
>
> Here are the local interrupts defined by the RISC-V ISA spec:
> IRQ13 => PMU overflow interrupt (used by SBI PMU driver)
> IRQ12 => S-mode guest external interrupt (to be used by KVM RISC-V)
> IRQ11 => M-mode external interrupt (used by firmware)
> IRQ9 => S-mode external interrupt (used by PLIC driver)
> IRQ7 => M-mode timer interrupt
> IRQ5 => S-mode timer interrupt (used by SBI Timer driver)
> IRQ3 => M-mode software interrupt (used by firmware)
> IRQ1 => S-mode software interrupt (used by SBI IPI driver)
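(For concreteness: a driver without a DT node would consume one of these
hwirqs roughly as in the sketch below. This assumes the synthetic-fwnode
approach from the patch; the names and error handling are illustrative
only.)

	/*
	 * Sketch only: e.g. the SBI IPI driver mapping hwirq 1
	 * (S-mode software interrupt) through the INTC fwnode.
	 */
	struct irq_domain *domain;
	int ipi_virq;

	domain = irq_find_matching_fwnode(riscv_intc_fwnode(),
					  DOMAIN_BUS_ANY);
	if (!domain)
		return -ENODEV;

	ipi_virq = irq_create_mapping(domain, 1);
	if (!ipi_virq)
		return -ENODEV;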
Again, you are missing the point. It isn't about the interrupt
numbers (nobody gives a crap about them). It is about the entity
the device is connected to. No amount of copy-pasting of the
spec changes that.
>>
>> >> what you should have instead is something like:
>> >>
>> >> static struct fwnode_handle *(*__get_root_intc_node)(void);
>> >>
>> >> struct fwnode_handle *riscv_get_root_intc_hwnode(void)
>> >> {
>> >> 	if (__get_root_intc_node)
>> >> 		return __get_root_intc_node();
>> >>
>> >> 	return NULL;
>> >> }
>> >>
>> >> and the corresponding registration interface.
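(A minimal sketch of what that registration interface could look like,
paired with the __get_root_intc_node pointer above; the function names
are illustrative, not something this thread defines.)

	/* Sketch only: called by the root interrupt controller driver. */
	void riscv_set_root_intc_hwnode_cb(struct fwnode_handle *(*cb)(void))
	{
		/* There can only be one root INTC provider. */
		WARN_ON(__get_root_intc_node);
		__get_root_intc_node = cb;
	}

	/* In the INTC driver, once its irqdomain has been created: */
	static struct fwnode_handle *riscv_intc_hwnode(void)
	{
		return intc_domain ? intc_domain->fwnode : NULL;
	}

	/* ... and at INTC init time: */
	riscv_set_root_intc_hwnode_cb(riscv_intc_hwnode);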
>> >
>> > Thanks, I will follow this suggestion. This is a much better approach
>> > and it will avoid touching existing drivers.
>> >
>> >>
>> >> But either way, something breaks: the INTC has one node per CPU, and
>> >> expects one irqdomain per CPU. Having a single fwnode completely
>> >> breaks the INTC driver (and probably the irqdomain list, as we don't
>> >> check for duplicate entries).
>> >>
>> >> > diff --git a/drivers/irqchip/irq-riscv-intc.c
>> >> > b/drivers/irqchip/irq-riscv-intc.c
>> >> > index b65bd8878d4f..26ed62c11768 100644
>> >> > --- a/drivers/irqchip/irq-riscv-intc.c
>> >> > +++ b/drivers/irqchip/irq-riscv-intc.c
>> >> > @@ -112,8 +112,16 @@ static int __init riscv_intc_init(struct
>> >> > device_node *node,
>> >> > if (riscv_hartid_to_cpuid(hartid) != smp_processor_id())
>> >> > return 0;
>> >> >
>> >> > - intc_domain = irq_domain_add_linear(node, BITS_PER_LONG,
>> >> > - &riscv_intc_domain_ops, NULL);
>> >> > + /*
>> >> > + * Create INTC domain using a synthetic fwnode which will allow
>> >> > + * drivers (such as RISC-V SBI IPI driver, RISC-V timer driver,
>> >> > + * RISC-V PMU driver, etc) not having dedicated DT/ACPI node to
>> >> > + * directly create interrupt mapping for standard local interrupt
>> >> > + * numbers defined by the RISC-V privileged specification.
>> >> > + */
>> >> > + intc_domain = irq_domain_create_linear(riscv_intc_fwnode(),
>> >> > + BITS_PER_LONG,
>> >> > + &riscv_intc_domain_ops, NULL);
>> >>
>> >> This is what I'm talking about. It is simply broken. So either you
>> >> don't need a per-CPU node (and the DT was bad in the first place),
>> >> or you absolutely need one (and the whole 'well-known/default domain'
>> >> doesn't work at all).
>> >>
>> >> Either way, this patch is plain wrong.
>> >
>> > Okay, I will update this patch with the new approach which you
>> > suggested.
>>
>> But how do you plan to work around the fact that everything is
>> currently built around having a node (and an irqdomain) per CPU? The
>> PLIC, for example, clearly has one parent per CPU, not one global
>> parent.
>>
>> I'm sure there was a good reason for this, and I suspect merging the
>> domains will simply end up breaking things.
>
> We can have multiple PLIC instances in a platform and each PLIC
> instance targets a subset of CPUs. Further, each PLIC instance has
> multiple PLIC contexts for associated CPUs.
>
> The per-CPU INTC DT nodes and the "interrupts-extended" DT
> property of the PLIC DT node help us describe the association
> between the various PLIC contexts and CPUs.
>
> Here's an example PLIC DT node:
>
> plic: interrupt-controller@...0000 {
> 	#address-cells = <0>;
> 	#interrupt-cells = <1>;
> 	compatible = "sifive,fu540-c000-plic", "sifive,plic-1.0.0";
> 	interrupt-controller;
> 	interrupts-extended = <&cpu0_intc 11>,
> 			      <&cpu1_intc 11>, <&cpu1_intc 9>,
> 			      <&cpu2_intc 11>, <&cpu2_intc 9>,
> 			      <&cpu3_intc 11>, <&cpu3_intc 9>,
> 			      <&cpu4_intc 11>, <&cpu4_intc 9>;
> 	reg = <0xc000000 0x4000000>;
> 	riscv,ndev = <10>;
> };
>
> In the above example, the PLIC has 9 contexts and the context-to-CPU
> connections are as follows:
> PLIC context0 => CPU0 M-mode external interrupt
> PLIC context1 => CPU1 M-mode external interrupt
> PLIC context2 => CPU1 S-mode external interrupt
> PLIC context3 => CPU2 M-mode external interrupt
> PLIC context4 => CPU2 S-mode external interrupt
> ....
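(For reference, a driver recovers this context-to-CPU mapping from
"interrupts-extended" roughly as in the sketch below, similar in spirit
to what the PLIC driver does; "node" is the PLIC device_node and error
handling is omitted.)

	int i, nr_contexts = of_irq_count(node);

	for (i = 0; i < nr_contexts; i++) {
		struct of_phandle_args parent;

		if (of_irq_parse_one(node, i, &parent))
			continue;

		/* Only claim S-mode external interrupt contexts (hwirq 9). */
		if (parent.args[0] != 9)
			continue;

		/* parent.np is the per-CPU INTC node this context targets. */
	}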
Asymmetric interrupt routing. How lovely. How broken.
>
> This is just one example; we can describe any kind of PLIC
> context-to-CPU connection using the "interrupts-extended"
> DT property.
>
> The same level of flexibility is provided by AIA interrupt
> controllers which are under development.
How promising. I really hope someone will eventually barge in
and clean this mess.
M.
--
Jazz is not dead. It just smells funny...