Message-ID: <7d91fa2a-57c5-6c78-8e2d-7fbdd6a11cba@loongson.cn>
Date: Sun, 23 Apr 2023 21:52:49 +0800
From: "bibo, mao" <maobibo@...ngson.cn>
To: Peter Zijlstra <peterz@...radead.org>,
Frederic Weisbecker <frederic@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Huacai Chen <chenhuacai@...nel.org>,
WANG Xuerui <kernel@...0n.name>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Anna-Maria Behnsen <anna-maria@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: Loongson (and other $ARCHs?) idle VS timer enqueue
On 2023/4/22 23:04, Peter Zijlstra wrote:
> On Sat, Apr 22, 2023 at 04:21:45PM +0200, Frederic Weisbecker wrote:
>> On Sat, Apr 22, 2023 at 10:17:00AM +0200, Peter Zijlstra wrote:
>>> diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
>>> index 44ff1ff64260..5a102ff80de0 100644
>>> --- a/arch/loongarch/kernel/genex.S
>>> +++ b/arch/loongarch/kernel/genex.S
>>> @@ -40,6 +40,7 @@ SYM_FUNC_START(handle_vint)
>>>  	ori	t0, t0, 0x1f
>>>  	xori	t0, t0, 0x1f
>>>  	bne	t0, t1, 1f
>>> +	addi.d	t0, t0, 0x20
>>>  	LONG_S	t0, sp, PT_ERA
>>>  1:	move	a0, sp
>>>  	move	a1, sp
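As I read this hunk, the intent is roughly the following (my annotation
only, untested; t0 holds ERA loaded from PT_ERA earlier in handle_vint,
t1 holds the address of __arch_cpu_idle):

	ori	t0, t0, 0x1f	/* set the low 5 bits of ERA ...            */
	xori	t0, t0, 0x1f	/* ... then clear them: align ERA down to   */
				/* the 32-byte rollback region boundary     */
	bne	t0, t1, 1f	/* ERA outside __arch_cpu_idle: leave it    */
	addi.d	t0, t0, 0x20	/* fast-forward ERA past the region, i.e.   */
	LONG_S	t0, sp, PT_ERA	/* to "1: jr ra", instead of rolling it     */
				/* back to the region start                 */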
>>
>> But the interrupts are enabled in C from arch_cpu_idle(), which
>> only then calls the ASM __arch_cpu_idle(). So if the interrupt happens
>> somewhere in between the call, the rollback (or fast-forward now)
>> doesn't apply.
I do not know the scheduler and timer details well. If the interrupt
happens in that window, will the _TIF_NEED_RESCHED flag be set? If it is
set, the rollback will still apply.
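For context, the window being described is in arch/loongarch/kernel/idle.c
(current code, simplified):

void __cpuidle arch_cpu_idle(void)
{
	raw_local_irq_enable();
	/* <-- an interrupt taken here runs outside the ASM rollback region */
	__arch_cpu_idle();	/* idle instruction needs irq enabled */
	raw_local_irq_disable();
}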
>>
>> I guess interrupts need to be re-enabled from ASM in the beginning
>> of __arch_cpu_idle() so that it's part of the fast-forward region.
>
> Right; something like so I suppose, but at this point I'm really just
> guessing... Loongarch person will have to do.
>
> diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
> index 44ff1ff64260..4814ac5334ef 100644
> --- a/arch/loongarch/kernel/genex.S
> +++ b/arch/loongarch/kernel/genex.S
> @@ -19,13 +19,13 @@
>  	.align	5
>  SYM_FUNC_START(__arch_cpu_idle)
>  	/* start of rollback region */
> +	move		t0, CSR_CRMD_IE
> +	csrxchg		t0, t0, LOONGARCH_CSR_CRMD
>  	LONG_L	t0, tp, TI_FLAGS
>  	nop
>  	andi	t0, t0, _TIF_NEED_RESCHED
>  	bnez	t0, 1f
>  	nop
> -	nop
> -	nop
>  	idle	0
>  	/* end of rollback region */
>  1:	jr	ra
> @@ -40,6 +40,7 @@ SYM_FUNC_START(handle_vint)
>  	ori	t0, t0, 0x1f
>  	xori	t0, t0, 0x1f
>  	bne	t0, t1, 1f
> +	addi.d	t0, t0, 0x20
This patch looks more reasonable: with it, the interrupt return jumps
straight out of the idle function. If so, can we remove the
_TIF_NEED_RESCHED check from the idle ASM function? Something like the
following, on top of your diff (the '-'/'+' lines without '>' are mine):
> +	move		t0, CSR_CRMD_IE
> +	csrxchg		t0, t0, LOONGARCH_CSR_CRMD
-	LONG_L	t0, tp, TI_FLAGS
+	nop
>  	nop
-	andi	t0, t0, _TIF_NEED_RESCHED
-	bnez	t0, 1f
+	nop
+	nop
>  	nop
> -	nop
> -	nop
>  	idle	0
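The resulting function would then be roughly as below (sketch only; it
keeps the untested csrxchg sequence from your diff, and the added nops
keep the rollback region at its fixed 32-byte size):

SYM_FUNC_START(__arch_cpu_idle)
	/* start of rollback region */
	move		t0, CSR_CRMD_IE			/* as in the diff above */
	csrxchg		t0, t0, LOONGARCH_CSR_CRMD	/* set CRMD.IE: irqs on */
	nop
	nop
	nop
	nop
	nop
	idle	0
	/* end of rollback region */
1:	jr	ra
SYM_FUNC_END(__arch_cpu_idle)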
Regards
Bibo, Mao
>  	LONG_S	t0, sp, PT_ERA
>  1:	move	a0, sp
>  	move	a1, sp
> diff --git a/arch/loongarch/kernel/idle.c b/arch/loongarch/kernel/idle.c
> index 0b5dd2faeb90..5ba72d229920 100644
> --- a/arch/loongarch/kernel/idle.c
> +++ b/arch/loongarch/kernel/idle.c
> @@ -11,7 +11,6 @@
>
>  void __cpuidle arch_cpu_idle(void)
>  {
> -	raw_local_irq_enable();
>  	__arch_cpu_idle();	/* idle instruction needs irq enabled */
>  	raw_local_irq_disable();
>  }
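For completeness, arch_cpu_idle() would then reduce to (as the hunk
implies):

void __cpuidle arch_cpu_idle(void)
{
	__arch_cpu_idle();	/* enables irqs itself, inside the rollback region */
	raw_local_irq_disable();
}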