Message-ID: <871qwspbof.fsf@mpe.ellerman.id.au>
Date: Tue, 17 May 2022 22:48:32 +1000
From: Michael Ellerman <mpe@...erman.id.au>
To: Christophe Leroy <christophe.leroy@...roup.eu>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>
Cc: Christophe Leroy <christophe.leroy@...roup.eu>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH] powerpc/irq: Remove arch_local_irq_restore() for
!CONFIG_CC_HAS_ASM_GOTO

Christophe Leroy <christophe.leroy@...roup.eu> writes:
> All supported versions of GCC support asm goto.

I thought clang was the one that only recently added support for asm
goto.

<looks>

Apparently clang added support in 2019, in clang 9. The earliest clang
we claim to support is 11.

So this patch is good, I'll just adjust the change log to say GCC/clang.

cheers
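
For anyone unfamiliar with the construct: asm goto lets inline assembly
branch directly to C labels, so the compiler sees the jump as a real
control-flow edge. A minimal sketch of the syntax (illustrative only,
not the code this patch touches; bit0_set is a made-up helper):

	/* Branch to a C label from inline asm if bit 0 of x is set. */
	static bool bit0_set(unsigned long x)
	{
		asm goto("andi. 0,%0,1\n\t"	/* r0 = x & 1, sets cr0 */
			 "bne %l[yes]"		/* taken if result non-zero */
			 : : "r" (x) : "r0", "cr0" : yes);
		return false;
	yes:
		return true;
	}

GCC has supported asm goto since 4.5, which is why every GCC version we
still support is covered.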
> diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
> index c768cde03e36..dd09919c3c66 100644
> --- a/arch/powerpc/kernel/irq.c
> +++ b/arch/powerpc/kernel/irq.c
> @@ -216,7 +216,6 @@ static inline void replay_soft_interrupts_irqrestore(void)
> #define replay_soft_interrupts_irqrestore() replay_soft_interrupts()
> #endif
>
> -#ifdef CONFIG_CC_HAS_ASM_GOTO
> notrace void arch_local_irq_restore(unsigned long mask)
> {
> unsigned char irq_happened;
> @@ -312,82 +311,6 @@ notrace void arch_local_irq_restore(unsigned long mask)
> __hard_irq_enable();
> preempt_enable();
> }
> -#else
> -notrace void arch_local_irq_restore(unsigned long mask)
> -{
> - unsigned char irq_happened;
> -
> - /* Write the new soft-enabled value */
> - irq_soft_mask_set(mask);
> - if (mask)
> - return;
> -
> - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
> - WARN_ON_ONCE(in_nmi() || in_hardirq());
> -
> - /*
> - * From this point onward, we can take interrupts, preempt,
> - * etc... unless we got hard-disabled. We check if an event
> - * happened. If none happened, we know we can just return.
> - *
> - * We may have preempted before the check below, in which case
> - * we are checking the "new" CPU instead of the old one. This
> - * is only a problem if an event happened on the "old" CPU.
> - *
> - * External interrupt events will have caused interrupts to
> - * be hard-disabled, so there is no problem, we
> - * cannot have preempted.
> - */
> - irq_happened = get_irq_happened();
> - if (!irq_happened) {
> - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
> - WARN_ON_ONCE(!(mfmsr() & MSR_EE));
> - return;
> - }
> -
> - /* We need to hard disable to replay. */
> - if (!(irq_happened & PACA_IRQ_HARD_DIS)) {
> - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
> - WARN_ON_ONCE(!(mfmsr() & MSR_EE));
> - __hard_irq_disable();
> - local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
> - } else {
> - /*
> - * We should already be hard disabled here. We had bugs
> - * where that wasn't the case so let's dbl check it and
> - * warn if we are wrong. Only do that when IRQ tracing
> - * is enabled as mfmsr() can be costly.
> - */
> - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
> - if (WARN_ON_ONCE(mfmsr() & MSR_EE))
> - __hard_irq_disable();
> - }
> -
> - if (irq_happened == PACA_IRQ_HARD_DIS) {
> - local_paca->irq_happened = 0;
> - __hard_irq_enable();
> - return;
> - }
> - }
> -
> - /*
> - * Disable preempt here, so that the below preempt_enable will
> - * perform resched if required (a replayed interrupt may set
> - * need_resched).
> - */
> - preempt_disable();
> - irq_soft_mask_set(IRQS_ALL_DISABLED);
> - trace_hardirqs_off();
> -
> - replay_soft_interrupts_irqrestore();
> - local_paca->irq_happened = 0;
> -
> - trace_hardirqs_on();
> - irq_soft_mask_set(IRQS_ENABLED);
> - __hard_irq_enable();
> - preempt_enable();
> -}
> -#endif
> EXPORT_SYMBOL(arch_local_irq_restore);
>
> /*
> --
> 2.35.1