Message-ID: <d169e57d-d0a8-4fe8-a44e-2f7a967b5121@csgroup.eu>
Date: Tue, 26 Nov 2024 11:48:54 +0100
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: Shrikanth Hegde <sshegde@...ux.ibm.com>, mpe@...erman.id.au,
linuxppc-dev@...ts.ozlabs.org
Cc: npiggin@...il.com, maddy@...ux.ibm.com, bigeasy@...utronix.de,
ankur.a.arora@...cle.com, linux-kernel@...r.kernel.org,
mark.rutland@....com, vschneid@...hat.com, peterz@...radead.org
Subject: Re: [PATCH 2/3] powerpc: support dynamic preemption

On 25/11/2024 at 05:22, Shrikanth Hegde wrote:
> Once lazy preemption is supported, it is desirable to be able to change
> the preemption model at runtime. So this change adds support for dynamic
> preemption using HAVE_PREEMPT_DYNAMIC_KEY.
>
> In the irq-exit-to-kernel path, use preempt_model_preemptible() for the
> decision. The other option would be a static-key based decision. Keep it
> simple instead, since the static-key based change didn't show any
> performance improvement.
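
To spell out the alternative for the record: the static-key based check
would look roughly like the untested sketch below, assuming the generic
sk_dynamic_irqentry_exit_cond_resched key is what gets toggled. The
difference is a runtime-patched branch instead of the load and test
behind preempt_model_preemptible():

	if (static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched)) {
		/* Return to preemptible kernel context */
		if (unlikely(read_thread_flags() & _TIF_NEED_RESCHED)) {
			if (preempt_count() == 0)
				preempt_schedule_irq();
		}
	}
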
>
> Tested lightly on a Power10 LPAR. Performance numbers indicate that
> preempt=none (non-dynamic) and preempt=none (dynamic) perform similarly.
> Only hackbench pipe shows a regression: there is a slight overhead from
> the runtime check for a preemptible kernel, and hackbench pipe is prone
> to such patterns [1].
>
> cat /sys/kernel/debug/sched/preempt
> (none) voluntary full lazy
> perf stat -e probe:__cond_resched -a sleep 1
> Performance counter stats for 'system wide':
> 1,253 probe:__cond_resched
>
> echo full > /sys/kernel/debug/sched/preempt
> cat /sys/kernel/debug/sched/preempt
> none voluntary (full) lazy
> perf stat -e probe:__cond_resched -a sleep 1
> Performance counter stats for 'system wide':
> 0 probe:__cond_resched
>
> echo lazy > /sys/kernel/debug/sched/preempt
> cat /sys/kernel/debug/sched/preempt
> none voluntary full (lazy)
> perf stat -e probe:__cond_resched -a sleep 1
> Performance counter stats for 'system wide':
> 0 probe:__cond_resched
>
> [1]: https://lore.kernel.org/all/1a973dda-c79e-4d95-935b-e4b93eb077b8@linux.ibm.com/
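
Side note if anyone wants to replay these numbers: the probe:__cond_resched
event has to be created first, presumably with something like
'perf probe -a __cond_resched' (an assumption on my part; the cover letter
may describe the exact setup).
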
>
> Signed-off-by: Shrikanth Hegde <sshegde@...ux.ibm.com>
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/asm/preempt.h | 1 +
> arch/powerpc/kernel/interrupt.c | 6 +++++-
> arch/powerpc/lib/vmx-helper.c | 2 +-
> 4 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 6d6bbd93abab..01c58f5258c9 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -270,6 +270,7 @@ config PPC
> select HAVE_PERF_EVENTS_NMI if PPC64
> select HAVE_PERF_REGS
> select HAVE_PERF_USER_STACK_DUMP
> + select HAVE_PREEMPT_DYNAMIC_KEY
> select HAVE_RETHOOK if KPROBES
> select HAVE_REGS_AND_STACK_ACCESS_API
> select HAVE_RELIABLE_STACKTRACE
> diff --git a/arch/powerpc/include/asm/preempt.h b/arch/powerpc/include/asm/preempt.h
> index 51f8f3881523..c0a19ff3f78c 100644
> --- a/arch/powerpc/include/asm/preempt.h
> +++ b/arch/powerpc/include/asm/preempt.h
> @@ -84,6 +84,7 @@ extern asmlinkage void preempt_schedule_notrace(void);
>
> #if defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
>
> +DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
> void dynamic_preempt_schedule(void);
> void dynamic_preempt_schedule_notrace(void);
> #define __preempt_schedule() dynamic_preempt_schedule()
> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
> index 8f4acc55407b..0fb01019d7e0 100644
> --- a/arch/powerpc/kernel/interrupt.c
> +++ b/arch/powerpc/kernel/interrupt.c
> @@ -38,6 +38,10 @@ static inline bool exit_must_hard_disable(void)
> }
> #endif
>
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
> +#endif
Why is that needed at all? It isn't used.
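
For reference (from memory, so double-check me): with
CONFIG_HAVE_PREEMPT_DYNAMIC_KEY the scheduler core switches the model by
flipping keys of exactly this name, roughly:

	/* kernel/sched/core.c, approximately: */
	#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
	#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)

So the definition may only be there to satisfy that machinery; nothing on
the powerpc side ever reads the key, since the exit path below tests
preempt_model_preemptible() instead.
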
> +
> /*
> * local irqs must be disabled. Returns false if the caller must re-enable
> * them, check for new work, and try again.
> @@ -396,7 +400,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
> /* Returning to a kernel context with local irqs enabled. */
> WARN_ON_ONCE(!(regs->msr & MSR_EE));
> again:
> - if (IS_ENABLED(CONFIG_PREEMPTION)) {
> + if (preempt_model_preemptible()) {
> /* Return to preemptible kernel context */
> if (unlikely(read_thread_flags() & _TIF_NEED_RESCHED)) {
> if (preempt_count() == 0)
> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
> index 58ed6bd613a6..7b069c832ce2 100644
> --- a/arch/powerpc/lib/vmx-helper.c
> +++ b/arch/powerpc/lib/vmx-helper.c
> @@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
> * set and we are preemptible. The hack here is to schedule a
> * decrementer to fire here and reschedule for us if necessary.
> */
> - if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
> + if (preempt_model_preemptible() && need_resched())
> set_dec(1);
> return 0;
> }
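
(For anyone reading along: set_dec(1) arms the decrementer to fire almost
immediately, so the reschedule happens from the resulting timer interrupt's
exit path rather than here.)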