Message-ID: <87ecre32dl.ffs@tglx>
Date: Tue, 07 Oct 2025 22:04:06 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Radu Rendec <rrendec@...hat.com>, Manivannan Sadhasivam <mani@...nel.org>
Cc: Daniel Tsai <danielsftsai@...gle.com>, Marek Behún <kabel@...nel.org>,
 Krishna Chaitanya Chundru <quic_krichai@...cinc.com>,
 Bjorn Helgaas <bhelgaas@...gle.com>, Rob Herring <robh@...nel.org>,
 Krzysztof Wilczyński <kwilczynski@...nel.org>,
 Lorenzo Pieralisi <lpieralisi@...nel.org>, Jingoo Han <jingoohan1@...il.com>,
 Brian Masney <bmasney@...hat.com>, Eric Chanudet <echanude@...hat.com>,
 Alessandro Carminati <acarmina@...hat.com>, Jared Kangas <jkangas@...hat.com>,
 linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] genirq: Add interrupt redirection infrastructure
On Mon, Oct 06 2025 at 18:38, Radu Rendec wrote:
> +
> +static bool demux_redirect_remote(struct irq_desc *desc)
> +{
> +#ifdef CONFIG_SMP
Please have a function and a stub for the !SMP case.
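I.e. the usual idiom (sketch only, same pattern this patch already uses
for synchronize_irqwork() in manage.c):

#ifdef CONFIG_SMP
static bool demux_redirect_remote(struct irq_desc *desc)
{
	/* ... SMP-only redirect logic as quoted below ... */
	return true;
}
#else
/* On !SMP there is no remote CPU to redirect to */
static inline bool demux_redirect_remote(struct irq_desc *desc) { return false; }
#endif
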
> + const struct cpumask *m = irq_data_get_effective_affinity_mask(&desc->irq_data);
> + unsigned int target_cpu = READ_ONCE(desc->redirect.fallback_cpu);
What does fallback_cpu mean in this context? That's confusing at best.
> + if (!cpumask_test_cpu(smp_processor_id(), m)) {
Why cpumask_test_cpu()?

     if (target_cpu != smp_processor_id())

should be good enough and understandable :)
> + /* Protect against shutdown */
> + if (desc->action)
> + irq_work_queue_on(&desc->redirect.work, target_cpu);
Can you please document why this is correct vs. CPU hotplug (especially unplug)?
I think it is, but I didn't look too carefully.
> +/**
> + * generic_handle_demux_domain_irq - Invoke the handler for a hardware interrupt
> + * of a demultiplexing domain.
> + * @domain: The domain where to perform the lookup
> + * @hwirq: The hardware interrupt number to convert to a logical one
> + *
> + * Returns: True on success, or false if lookup has failed
> + */
> +bool generic_handle_demux_domain_irq(struct irq_domain *domain, unsigned int hwirq)
> +{
> + struct irq_desc *desc = irq_resolve_mapping(domain, hwirq);
> +
> + if (unlikely(!desc))
> + return false;
> +
> + scoped_guard(raw_spinlock, &desc->lock) {
> + if (desc->irq_data.chip->irq_pre_redirect)
> + desc->irq_data.chip->irq_pre_redirect(&desc->irq_data);
I'd rather see that in the redirect function above.
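I.e. something like this (untested sketch, folding the earlier comments
in as well):

static bool demux_redirect_remote(struct irq_desc *desc)
{
	unsigned int target_cpu = READ_ONCE(desc->redirect.fallback_cpu);

	if (target_cpu == smp_processor_id())
		return false;

	if (desc->irq_data.chip->irq_pre_redirect)
		desc->irq_data.chip->irq_pre_redirect(&desc->irq_data);

	/* Protect against shutdown */
	if (desc->action)
		irq_work_queue_on(&desc->redirect.work, target_cpu);
	return true;
}

which leaves nothing but the redirect check under the lock in the caller.
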
> + if (demux_redirect_remote(desc))
> + return true;
> + }
> + return !handle_irq_desc(desc);
> +}
> +EXPORT_SYMBOL_GPL(generic_handle_demux_domain_irq);
> +
> #endif
>
> /* Dynamic interrupt handling */
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index c94837382037e..ed8f8b2667b0b 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -35,6 +35,16 @@ static int __init setup_forced_irqthreads(char *arg)
> early_param("threadirqs", setup_forced_irqthreads);
> #endif
>
> +#ifdef CONFIG_SMP
> +static inline void synchronize_irqwork(struct irq_desc *desc)
> +{
> + /* Synchronize pending or on the fly redirect work */
> + irq_work_sync(&desc->redirect.work);
> +}
> +#else
> +static inline void synchronize_irqwork(struct irq_desc *desc) { }
> +#endif
> +
> static int __irq_get_irqchip_state(struct irq_data *d, enum irqchip_irq_state which, bool *state);
>
> static void __synchronize_hardirq(struct irq_desc *desc, bool sync_chip)
> @@ -43,6 +53,8 @@ static void __synchronize_hardirq(struct irq_desc *desc, bool sync_chip)
> bool inprogress;
>
> do {
> + synchronize_irqwork(desc);
That can't work. irq_work_sync() requires interrupts and preemption
enabled, but __synchronize_hardirq() can be invoked from interrupt or
preemption disabled context.

And it's not required at all, because this is no different from a
hardware device interrupt: either it is already being handled
(IRQ_INPROGRESS) on some other CPU or it is not. This code cannot
anticipate an interrupt which is in flight somewhere in the system but
not yet raised and handled on a CPU.
Though you want to invoke it in __synchronize_irq() _before_ invoking
__synchronize_hardirq().
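I.e. something like this (untested sketch, assuming __synchronize_irq()
otherwise stays as in current mainline):

static void __synchronize_irq(struct irq_desc *desc)
{
	/* Flush pending or in-flight redirect irq_work first */
	synchronize_irqwork(desc);
	__synchronize_hardirq(desc, true);
	/*
	 * We made sure that no hardirq handler is running. Now verify that
	 * no threaded handlers are active.
	 */
	wait_event(desc->wait_for_threads, !atomic_read(&desc->threads_active));
}
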
Thanks,
tglx