Message-ID: <YfvKM0JVsmAd67OG@FVFF77S0Q05N>
Date: Thu, 3 Feb 2022 12:27:46 +0000
From: Mark Rutland <mark.rutland@....com>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: linux-arm-kernel@...ts.infradead.org, ardb@...nel.org,
catalin.marinas@....com, juri.lelli@...hat.com,
linux-kernel@...r.kernel.org, mingo@...hat.com,
peterz@...radead.org, will@...nel.org
Subject: Re: [PATCH 5/6] sched/preempt: add PREEMPT_DYNAMIC using static keys
On Thu, Feb 03, 2022 at 12:34:53PM +0100, Frederic Weisbecker wrote:
> On Thu, Feb 03, 2022 at 09:51:46AM +0000, Mark Rutland wrote:
> > On Thu, Feb 03, 2022 at 12:21:45AM +0100, Frederic Weisbecker wrote:
> > > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > > index 78c351e35fec..7710b6593c72 100644
> > > > --- a/include/linux/sched.h
> > > > +++ b/include/linux/sched.h
> > > > @@ -2008,7 +2008,7 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
> > > > #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
> > > > extern int __cond_resched(void);
> > > >
> > > > -#ifdef CONFIG_PREEMPT_DYNAMIC
> > > > +#if defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
> > > >
> > > > DECLARE_STATIC_CALL(cond_resched, __cond_resched);
> > > >
> > > > @@ -2017,6 +2017,14 @@ static __always_inline int _cond_resched(void)
> > > > return static_call_mod(cond_resched)();
> > > > }
> > > >
> > > > +#elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
> > > > +extern int dynamic_cond_resched(void);
> > > > +
> > > > +static __always_inline int _cond_resched(void)
> > > > +{
> > > > + return dynamic_cond_resched();
> > >
> > > So in the end this is creating an indirect call for every preemption entrypoint.
> >
> > Huh? "indirect call" usually means a branch to a function pointer, and I don't
> > think that's what you mean here. Do you just mean that we add a (direct)
> > call+return?
>
> Right, basic terminology and me...
No problem; just wanted to make sure we were talking about the same thing! :)
> > This gets inlined, and will be just a direct call to dynamic_cond_resched().
> > e.g. on arm64 this will be a single instruction:
> >
> > bl dynamic_cond_resched
> >
> > ... and (as the commit message describes) the implementation of
> > dynamic_cond_resched will be the same as the regular __cond_resched *but* the
> > static key trampoline is inlined at the start, e.g.
> >
> > | <dynamic_cond_resched>:
> > | bti c
> > | b <dynamic_cond_resched+0x10>
> > | mov w0, #0x0 // #0
> > | ret
> > | mrs x0, sp_el0
> > | ldr x0, [x0, #8]
> > | cbnz x0, <dynamic_cond_resched+0x8>
> > | paciasp
> > | stp x29, x30, [sp, #-16]!
> > | mov x29, sp
> > | bl <preempt_schedule_common>
> > | mov w0, #0x1 // #1
> > | ldp x29, x30, [sp], #16
> > | autiasp
> > | ret
> >
> > ... compared to the regular form of the function:
> >
> > | <__cond_resched>:
> > | bti c
> > | mrs x0, sp_el0
> > | ldr x1, [x0, #8]
> > | cbz x1, <__cond_resched+0x18>
> > | mov w0, #0x0 // #0
> > | ret
> > | paciasp
> > | stp x29, x30, [sp, #-16]!
> > | mov x29, sp
> > | bl <preempt_schedule_common>
> > | mov w0, #0x1 // #1
> > | ldp x29, x30, [sp], #16
> > | autiasp
> > | ret
>
> Who reads changelogs anyway? ;-)
>
> Ok I didn't know about that. Is this a guaranteed behaviour everywhere?
For any architecture that implements static keys with jump labels, it should
look roughly as above. The *precise* codegen will depend on a bunch of
details, but the whole point of jump labels and static keys is to permit
codegen like this.
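To make that concrete, the C shape behind the disassembly above is roughly
the following (just a sketch of the mechanism rather than the exact patch;
the key name here is illustrative):

| /* Boot-time switch; the real default follows the chosen preemption model. */
| DEFINE_STATIC_KEY_TRUE(sk_dynamic_cond_resched);
|
| int __sched dynamic_cond_resched(void)
| {
| 	/*
| 	 * With jump labels the static branch is a single patched
| 	 * NOP/branch, which is what produces the early-return trampoline
| 	 * at the start of the disassembly above.
| 	 */
| 	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
| 		return 0;
|
| 	return __cond_resched();
| }

... and the key can be flipped at boot via static_key_enable() /
static_key_disable() when the preemption model is selected, without touching
any of the cond_resched() call sites.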
> Perhaps put a big fat comment below the HAVE_PREEMPT_DYNAMIC_KEY help text to
> document this expectation, as I guess it depends on arch/compiler?
Sure; I'll come up with something for v2.
Thanks,
Mark.