Message-ID: <86v8kfyn62.wl-maz@kernel.org>
Date: Mon, 06 Feb 2023 12:48:53 +0000
From: Marc Zyngier <maz@...nel.org>
To: Mason Huo <mason.huo@...rfivetech.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
<linux-kernel@...r.kernel.org>, <linux-riscv@...ts.infradead.org>,
Ley Foon Tan <leyfoon.tan@...rfivetech.com>,
Sia Jee Heng <jeeheng.sia@...rfivetech.com>
Subject: Re: [PATCH v1] irqchip/irq-sifive-plic: Add syscore callbacks for hibernation
On Mon, 06 Feb 2023 06:13:11 +0000,
Mason Huo <mason.huo@...rfivetech.com> wrote:
>
>
>
> On 2023/2/5 18:51, Marc Zyngier wrote:
> > On Fri, 13 Jan 2023 09:42:16 +0000,
> > Mason Huo <mason.huo@...rfivetech.com> wrote:
> >>
> >> The priority and enable registers of the PLIC are reset across a
> >> hibernation power cycle in poweroff mode, so add syscore callbacks
> >> to save and restore those registers.
> >>
> >> Signed-off-by: Mason Huo <mason.huo@...rfivetech.com>
> >> Reviewed-by: Ley Foon Tan <leyfoon.tan@...rfivetech.com>
> >> Reviewed-by: Sia Jee Heng <jeeheng.sia@...rfivetech.com>
> >> ---
> >> drivers/irqchip/irq-sifive-plic.c | 93 ++++++++++++++++++++++++++++++-
> >> 1 file changed, 91 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> >> index ff47bd0dec45..80306de45d2b 100644
> >> --- a/drivers/irqchip/irq-sifive-plic.c
> >> +++ b/drivers/irqchip/irq-sifive-plic.c
> >> @@ -17,6 +17,7 @@
> >> #include <linux/of_irq.h>
> >> #include <linux/platform_device.h>
> >> #include <linux/spinlock.h>
> >> +#include <linux/syscore_ops.h>
> >> #include <asm/smp.h>
> >>
> >> /*
> >> @@ -67,6 +68,8 @@ struct plic_priv {
> >> struct irq_domain *irqdomain;
> >> void __iomem *regs;
> >> unsigned long plic_quirks;
> >> + unsigned int nr_irqs;
> >> + u32 *priority_reg;
> >> };
> >>
> >> struct plic_handler {
> >> @@ -79,10 +82,13 @@ struct plic_handler {
> >> raw_spinlock_t enable_lock;
> >> void __iomem *enable_base;
> >> struct plic_priv *priv;
> >> + /* To record interrupts that are enabled before suspend. */
> >> + u32 enable_reg[MAX_DEVICES / 32];
> >
> > What does MAX_DEVICES represent here? How is it related to the number
> > of interrupts you're trying to save? It seems to be related to the
> > number of CPUs, so it hardly makes any sense so far.
> >
> The comment above this macro says "The largest number supported
> by devices marked as 'sifive,plic-1.0.0', is 1024, of which
> device 0 is defined as non-existent by the RISC-V Privileged Spec."
> As far as I understand, *device* here means a HW IRQ source,
> and HW IRQ 0 is non-existent.
So why is it sized to that maximum value? The binding gives you the
*real* value that the HW implements.
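
Something along these lines (a rough sketch only; the enable_save
field is hypothetical and not part of the posted patch) would size
the save area from the count that probe already reads out of the
"riscv,ndev" property:

	/*
	 * Sketch: size the per-handler save area from the probed
	 * interrupt count instead of the architectural maximum
	 * (MAX_DEVICES).
	 */
	handler->enable_save = kcalloc(DIV_ROUND_UP(priv->nr_irqs, 32),
				       sizeof(u32), GFP_KERNEL);
	if (!handler->enable_save)
		return -ENOMEM;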
>
> >> };
> >> static int plic_parent_irq __ro_after_init;
> >> static bool plic_cpuhp_setup_done __ro_after_init;
> >> static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
> >> +static struct plic_priv *priv_data;
> >>
> >> static int plic_irq_set_type(struct irq_data *d, unsigned int type);
> >>
> >> @@ -229,6 +235,78 @@ static int plic_irq_set_type(struct irq_data *d, unsigned int type)
> >> return IRQ_SET_MASK_OK;
> >> }
> >>
> >> +static void plic_irq_resume(void)
> >> +{
> >> + unsigned int i, cpu;
> >> + u32 __iomem *reg;
> >> +
> >> + for (i = 0; i < priv_data->nr_irqs; i++)
> >> + writel(priv_data->priority_reg[i],
> >> + priv_data->regs + PRIORITY_BASE + i * PRIORITY_PER_ID);
> >
> > From what I can tell, this driver uses exactly 2 priorities: 0 and 1.
> > And yet you use a full 32 bits to encode those. Does that seem like
> > a good idea?
> >
> Yes, currently this driver uses only 2 priorities.
> But, according to the SiFive spec, the priority register is a 32-bit
> register, and it supports 7 levels of priority.
And? This is a Linux driver, not an implementation validation
tool. What is the point of saving/restoring stuff that is *never*
used? :-(
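
Something along these lines (a rough sketch only, assuming a
hypothetical prio_save bitmap in struct plic_priv with one bit per
source, allocated with bitmap_zalloc(priv->nr_irqs, GFP_KERNEL))
would avoid storing 32 bits per source:

	/* suspend: remember only whether each source priority is non-zero */
	for (i = 0; i < priv->nr_irqs; i++) {
		if (readl(priv->regs + PRIORITY_BASE + i * PRIORITY_PER_ID))
			__set_bit(i, priv->prio_save);
		else
			__clear_bit(i, priv->prio_save);
	}

	/* resume: the driver only ever programs priorities 0 and 1 */
	for (i = 0; i < priv->nr_irqs; i++)
		writel(test_bit(i, priv->prio_save) ? 1 : 0,
		       priv->regs + PRIORITY_BASE + i * PRIORITY_PER_ID);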
M.
--
Without deviation from the norm, progress is not possible.