Message-ID: <b691a46e-7461-89c8-c760-a1ef9769091f@gmail.com>
Date: Tue, 19 May 2020 12:47:24 -0700
From: Florian Fainelli <f.fainelli@...il.com>
To: Marc Zyngier <maz@...nel.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc: Sumit Garg <sumit.garg@...aro.org>, kernel-team@...roid.com,
Russell King <linux@....linux.org.uk>,
Jason Cooper <jason@...edaemon.net>,
Catalin Marinas <catalin.marinas@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>
Subject: Re: [PATCH 01/11] genirq: Add fasteoi IPI flow
On 5/19/2020 9:17 AM, Marc Zyngier wrote:
> For irqchips using the fasteoi flow, IPIs are a bit special.
>
> They need to be EOIed early (before calling the handler), as funny
> things may happen in the handler (they do not necessarily behave
> like a normal interrupt), and the arch code is already handling
> the stats.
>
> Signed-off-by: Marc Zyngier <maz@...nel.org>
> ---
> include/linux/irq.h | 1 +
> kernel/irq/chip.c | 26 ++++++++++++++++++++++++++
> 2 files changed, 27 insertions(+)
>
> diff --git a/include/linux/irq.h b/include/linux/irq.h
> index 8d5bc2c237d7..726f94d8b8cc 100644
> --- a/include/linux/irq.h
> +++ b/include/linux/irq.h
> @@ -621,6 +621,7 @@ static inline int irq_set_parent(int irq, int parent_irq)
> */
> extern void handle_level_irq(struct irq_desc *desc);
> extern void handle_fasteoi_irq(struct irq_desc *desc);
> +extern void handle_percpu_devid_fasteoi_ipi(struct irq_desc *desc);
> extern void handle_edge_irq(struct irq_desc *desc);
> extern void handle_edge_eoi_irq(struct irq_desc *desc);
> extern void handle_simple_irq(struct irq_desc *desc);
> diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
> index 41e7e37a0928..7b0b789cfed4 100644
> --- a/kernel/irq/chip.c
> +++ b/kernel/irq/chip.c
> @@ -955,6 +955,32 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
> chip->irq_eoi(&desc->irq_data);
> }
>
> +/**
> + * handle_percpu_devid_fasteoi_ipi - Per CPU local IPI handler with per cpu
> + * dev ids
> + * @desc: the interrupt description structure for this irq
> + *
> + * The biggest differences with the IRQ version are that:
> + * - the interrupt is EOIed early, as the IPI could result in a context
> + * switch, and we need to make sure the IPI can fire again
> + * - Stats are usually handled at the architecture level, so we ignore them
> + * here
> + */
> +void handle_percpu_devid_fasteoi_ipi(struct irq_desc *desc)
> +{
> + struct irq_chip *chip = irq_desc_get_chip(desc);
> + struct irqaction *action = desc->action;
> + unsigned int irq = irq_desc_get_irq(desc);
> + irqreturn_t res;
Shouldn't this have a:

	if (!irq_settings_is_no_accounting(desc))
		__kstat_incr_irqs_this_cpu(desc);

here, in case you are using that handler with an SGI interrupt that is
not used as an IPI? (A rough sketch of what that could look like
follows the quoted hunk below.)
> +
> + if (chip->irq_eoi)
> + chip->irq_eoi(&desc->irq_data);
> +
> + trace_irq_handler_entry(irq, action);
> + res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id));
> + trace_irq_handler_exit(irq, action, res);
> +}
> +
> /**
> * handle_percpu_devid_fasteoi_nmi - Per CPU local NMI handler with per cpu
> * dev ids
>
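To illustrate, here is a rough sketch of how that accounting could slot
into the handler above. This is only a sketch: it assumes a
per-descriptor opt-out along the lines of the
irq_settings_is_no_accounting() helper named above, which would still
need to be added to kernel/irq/settings.h, while
__kstat_incr_irqs_this_cpu() is the existing per-CPU stats helper from
kernel/irq/internals.h:

void handle_percpu_devid_fasteoi_ipi(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct irqaction *action = desc->action;
	unsigned int irq = irq_desc_get_irq(desc);
	irqreturn_t res;

	/*
	 * Account the interrupt unless the descriptor opted out, so an
	 * SGI that is not used as an IPI still shows up in the per-CPU
	 * stats. irq_settings_is_no_accounting() is assumed here and
	 * would have to be introduced.
	 */
	if (!irq_settings_is_no_accounting(desc))
		__kstat_incr_irqs_this_cpu(desc);

	/*
	 * EOI early so the IPI can fire again, even if the handler ends
	 * up triggering a context switch.
	 */
	if (chip->irq_eoi)
		chip->irq_eoi(&desc->irq_data);

	trace_irq_handler_entry(irq, action);
	res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id));
	trace_irq_handler_exit(irq, action, res);
}
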
--
Florian