Message-ID: <CAAhV-H4hdCXwZ=x2_8d1dMZmO8c7XBnzovuVDhrSQ_cW0abmEg@mail.gmail.com>
Date: Mon, 10 Feb 2025 10:19:27 +0800
From: Huacai Chen <chenhuacai@...nel.org>
To: Marco Crivellari <marco.crivellari@...e.com>
Cc: loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org,
WANG Xuerui <kernel@...0n.name>, Frederic Weisbecker <frederic@...nel.org>,
Jinyang He <hejinyang@...ngson.cn>, Tiezhu Yang <yangtiezhu@...ngson.cn>,
Jiaxun Yang <jiaxun.yang@...goat.com>, Bibo Mao <maobibo@...ngson.cn>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>
Subject: Re: [PATCH v2 1/1] [PATCH] loongson: Fix idle VS timer enqueue

Hi, Marco,

Don't worry about the lkp robot; I have queued this patch with some
small modifications.

https://github.com/chenhuacai/linux/commit/a8aa673ea46c03b3f62992ffa4ffe810ac84f6e3


Huacai

On Sat, Feb 8, 2025 at 6:19 PM Marco Crivellari
<marco.crivellari@...e.com> wrote:
>
> Loongson re-enables interrupts in its idle routine and performs a
> TIF_NEED_RESCHED check afterwards before putting the CPU to sleep.
>
> IRQs firing between the check and the idle instruction may set the
> TIF_NEED_RESCHED flag. In order to deal with such a race, IRQs
> interrupting __arch_cpu_idle() roll back their return address to the
> beginning of __arch_cpu_idle() so that TIF_NEED_RESCHED is checked
> again before going back to sleep.
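>
> (Illustration only, not part of the original patch: a minimal C-like
> pseudocode sketch of the window this rollback protects; cpu_sleep() is
> a hypothetical stand-in for the "idle 0" instruction.)
>
> 	local_irq_enable();	/* an IRQ may fire from this point on */
> 	if (!test_thread_flag(TIF_NEED_RESCHED))
> 		cpu_sleep();	/* hypothetical stand-in for "idle 0" */
> 	/*
> 	 * An IRQ landing between the flag check and cpu_sleep() may set
> 	 * TIF_NEED_RESCHED; rolling the IRQ's return address back to the
> 	 * flag check is what kept the CPU from sleeping anyway.
> 	 */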
>
> However, idle IRQs can also queue timers that may require a tick
> reprogramming through a new generic idle loop iteration, but those
> timers would go unnoticed here because __arch_cpu_idle() only checks
> TIF_NEED_RESCHED; it doesn't check for pending timers.
>
> Fix this by fast-forwarding the return address of idle IRQs to the end
> of the idle routine instead of the beginning, so that the generic idle
> loop handles both TIF_NEED_RESCHED and pending timers.
>
> Fixes: 0603839b18f4 ("LoongArch: Add exception/interrupt handling")
> Cc: WANG Xuerui <kernel@...0n.name>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
> Signed-off-by: Marco Crivellari <marco.crivellari@...e.com>
> ---
> arch/loongarch/kernel/genex.S | 31 +++++++++++++++++--------------
> arch/loongarch/kernel/idle.c | 3 +--
> arch/loongarch/kernel/reset.c | 18 +++++++++---------
> 3 files changed, 27 insertions(+), 25 deletions(-)
>
> diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
> index 86d5d90ebefe..9623298a5cf1 100644
> --- a/arch/loongarch/kernel/genex.S
> +++ b/arch/loongarch/kernel/genex.S
> @@ -18,28 +18,31 @@
>
> .align 5
> SYM_FUNC_START(__arch_cpu_idle)
> - /* start of rollback region */
> - LONG_L t0, tp, TI_FLAGS
> - nop
> - andi t0, t0, _TIF_NEED_RESCHED
> - bnez t0, 1f
> - nop
> - nop
> - nop
> + /* start of idle interrupt region */
> + ori t0, zero, CSR_CRMD_IE
> + /* idle instruction needs irq enabled */
> + csrxchg t0, t0, LOONGARCH_CSR_CRMD
> + /*
> + * If an interrupt lands here, between enabling interrupts above and
> + * going idle on the next instruction, we must *NOT* go idle since the
> + * interrupt could have set TIF_NEED_RESCHED or caused a timer to need
> + * reprogramming. Fall through -- see handle_vint() below -- and have
> + * the idle loop take care of things.
> + */
> idle 0
> - /* end of rollback region */
> -1: jr ra
> + /* end of idle interrupt region */
> +SYM_INNER_LABEL(__arch_cpu_idle_exit, SYM_L_LOCAL)
> + jr ra
> SYM_FUNC_END(__arch_cpu_idle)
>
> SYM_CODE_START(handle_vint)
> UNWIND_HINT_UNDEFINED
> BACKUP_T0T1
> SAVE_ALL
> - la_abs t1, __arch_cpu_idle
> + la_abs t1, __arch_cpu_idle_exit
> LONG_L t0, sp, PT_ERA
> - /* 32 byte rollback region */
> - ori t0, t0, 0x1f
> - xori t0, t0, 0x1f
> + /* 3 instructions idle interrupt region */
> + ori t0, t0, 0x0c
> bne t0, t1, 1f
> LONG_S t0, sp, PT_ERA
> 1: move a0, sp
> diff --git a/arch/loongarch/kernel/idle.c b/arch/loongarch/kernel/idle.c
> index 0b5dd2faeb90..54b247d8cdb6 100644
> --- a/arch/loongarch/kernel/idle.c
> +++ b/arch/loongarch/kernel/idle.c
> @@ -11,7 +11,6 @@
>
> void __cpuidle arch_cpu_idle(void)
> {
> - raw_local_irq_enable();
> - __arch_cpu_idle(); /* idle instruction needs irq enabled */
> + __arch_cpu_idle();
> raw_local_irq_disable();
> }
> diff --git a/arch/loongarch/kernel/reset.c b/arch/loongarch/kernel/reset.c
> index 1ef8c6383535..8fd8c44b02cb 100644
> --- a/arch/loongarch/kernel/reset.c
> +++ b/arch/loongarch/kernel/reset.c
> @@ -32,9 +32,9 @@ void machine_halt(void)
> pr_notice("\n\n** You can safely turn off the power now **\n\n");
> console_flush_on_panic(CONSOLE_FLUSH_PENDING);
>
> - while (true) {
> - __arch_cpu_idle();
> - }
> + while (1) {
> + asm volatile("idle 0" : : : "memory");
> + }
> }
>
> void machine_power_off(void)
> @@ -52,9 +52,9 @@ void machine_power_off(void)
> efi.reset_system(EFI_RESET_SHUTDOWN, EFI_SUCCESS, 0, NULL);
> #endif
>
> - while (true) {
> - __arch_cpu_idle();
> - }
> + while (1) {
> + asm volatile("idle 0" : : : "memory");
> + }
> }
>
> void machine_restart(char *command)
> @@ -73,7 +73,7 @@ void machine_restart(char *command)
> if (!acpi_disabled)
> acpi_reboot();
>
> - while (true) {
> - __arch_cpu_idle();
> - }
> + while (1) {
> + asm volatile("idle 0" : : : "memory");
> + }
> }
> --
> 2.48.1
>