Message-ID: <20211022174011.GI86184@C02TD0UTHF1T.local>
Date: Fri, 22 Oct 2021 18:40:11 +0100
From: Mark Rutland <mark.rutland@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: keescook@...omium.org, x86@...nel.org,
linux-kernel@...r.kernel.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, akpm@...ux-foundation.org,
zhengqi.arch@...edance.com, linux@...linux.org.uk,
catalin.marinas@....com, will@...nel.org, mpe@...erman.id.au,
paul.walmsley@...ive.com, palmer@...belt.com, hca@...ux.ibm.com,
gor@...ux.ibm.com, borntraeger@...ibm.com,
linux-arch@...r.kernel.org, ardb@...nel.org
Subject: Re: [PATCH 5/7] powerpc, arm64: Mark __switch_to() as __sched
On Fri, Oct 22, 2021 at 05:09:38PM +0200, Peter Zijlstra wrote:
> Unlike most of the other architectures, PowerPC and ARM64 have
> __switch_to() as a C function which remains on the stack.
For clarity, could we say:
Unlike most of the other architectures, PowerPC and ARM64 have
__switch_to() as a C function which is visible when unwinding from
their assembly switch function.
... since both arm64 and powerpc are branch-and-link architectures, and
the return address into __switch_to() isn't stacked; it's in the GPR
context saved by the switch assembly.
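Concretely, on arm64 the unwind of a blocked task starts from the
context that cpu_switch_to() saved, and the saved pc there is
cpu_switch_to()'s lr, i.e. an address within __switch_to(). Roughly
(quoting the structure from memory rather than the exact tree this is
against):

struct cpu_context {
	unsigned long x19;
	/* ... callee-saved x20..x27 ... */
	unsigned long x28;
	unsigned long fp;
	unsigned long sp;
	unsigned long pc;	/* lr as saved by cpu_switch_to() */
};

#define thread_saved_pc(tsk)	\
	((unsigned long)(tsk)->thread.cpu_context.pc)
#define thread_saved_fp(tsk)	\
	((unsigned long)(tsk)->thread.cpu_context.fp)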
> Their
> respective __get_wchan() skips one stack frame unconditionally,
> without testing in_sched_functions().
and similarly s/stack frame/caller/ here.
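(For anyone unfamiliar: in_sched_functions() just checks whether an
address falls within the .sched.text section that __sched places
functions into -- roughly, from memory:

#define __sched		__section(".sched.text")

int in_sched_functions(unsigned long addr)
{
	return in_lock_functions(addr) ||
		(addr >= (unsigned long)__sched_text_start &&
		 addr <  (unsigned long)__sched_text_end);
}

... so section placement is all that matters here.)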
>
> Mark them __sched such that we can forgo that special case.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> arch/arm64/kernel/process.c | 4 ++--
> arch/powerpc/kernel/process.c | 4 ++--
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -490,8 +490,8 @@ void update_sctlr_el1(u64 sctlr)
> /*
> * Thread switching.
> */
> -__notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
> - struct task_struct *next)
> +__notrace_funcgraph __sched
> +struct task_struct *__switch_to(struct task_struct *prev, struct task_struct *next)
As this only matters for the call to our cpu_switch_to() assembly, this
looks sufficient to me. This only changes the placement of the function
and doesn't affect the existing tracing restrictions, so I don't think
this should have any adverse effect.
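For example, a generic wchan walk along the lines of the sketch below
(hypothetical example_get_wchan(), not necessarily what this series
ends up with) steps past __switch_to() with no arch-specific "skip the
caller" hack, since the nosched consume callback used by
stack_trace_save_tsk() already filters on in_sched_functions():

/* Sketch only, assuming CONFIG_ARCH_STACKWALK. */
static unsigned long example_get_wchan(struct task_struct *p)
{
	unsigned long entry = 0;

	/*
	 * stack_trace_save_tsk()'s consume callback drops any PC for
	 * which in_sched_functions() is true, which now covers
	 * __switch_to().
	 */
	stack_trace_save_tsk(p, &entry, 1, 0);

	return entry;
}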
For testing, this doesn't adversely affect the existing unwinder (which
should obviously be true since we skip the entry anyway, but hey..).
Regardless of the commit message wording:
Reviewed-by: Mark Rutland <mark.rutland@....com> [arm64]
Tested-by: Mark Rutland <mark.rutland@....com> [arm64]
Thanks,
Mark.
> {
> struct task_struct *last;
>
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1201,8 +1201,8 @@ static inline void restore_sprs(struct t
>
> }
>
> -struct task_struct *__switch_to(struct task_struct *prev,
> - struct task_struct *new)
> +__sched struct task_struct *__switch_to(struct task_struct *prev,
> + struct task_struct *new)
> {
> struct thread_struct *new_thread, *old_thread;
> struct task_struct *last;
>
>