Message-ID: <20230130194823.6y3rc227bvsgele4@treble>
Date: Mon, 30 Jan 2023 11:48:23 -0800
From: Josh Poimboeuf <jpoimboe@...nel.org>
To: Mark Rutland <mark.rutland@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Petr Mladek <pmladek@...e.com>,
Joe Lawrence <joe.lawrence@...hat.com>, kvm@...r.kernel.org,
"Michael S. Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
Jiri Kosina <jikos@...nel.org>, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
"Seth Forshee (DigitalOcean)" <sforshee@...italocean.com>,
live-patching@...r.kernel.org, Miroslav Benes <mbenes@...e.cz>
Subject: Re: [PATCH 0/2] vhost: improve livepatch switching for heavily
loaded vhost worker kthreads
On Mon, Jan 30, 2023 at 06:36:32PM +0000, Mark Rutland wrote:
> On Mon, Jan 30, 2023 at 01:40:18PM +0100, Peter Zijlstra wrote:
> > On Fri, Jan 27, 2023 at 02:11:31PM -0800, Josh Poimboeuf wrote:
> > > @@ -8500,8 +8502,10 @@ EXPORT_STATIC_CALL_TRAMP(might_resched);
> > > static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
> > > int __sched dynamic_cond_resched(void)
> > > {
> > > - if (!static_branch_unlikely(&sk_dynamic_cond_resched))
> > > + if (!static_branch_unlikely(&sk_dynamic_cond_resched)) {
> > > + klp_sched_try_switch();
> > > return 0;
> > > + }
> > > return __cond_resched();
> > > }
> > > EXPORT_SYMBOL(dynamic_cond_resched);
> >
> > I would make klp_sched_try_switch() not depend on
> > sk_dynamic_cond_resched, because __cond_resched() is not guaranteed
> > to pass through __schedule().
> >
> > But you'll probably want to check with Mark here; this all might
> > generate crap code on arm64.
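
Right, and making the hook unconditional would be easy enough,
something like this (untested):

  int __sched dynamic_cond_resched(void)
  {
          /* attempt the livepatch switch even when cond_resched() is a NOP */
          klp_sched_try_switch();
          if (!static_branch_unlikely(&sk_dynamic_cond_resched))
                  return 0;
          return __cond_resched();
  }
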
>
> IIUC here klp_sched_try_switch() is a static call, so on arm64 this'll generate
> at least a load, a conditional branch, and an indirect branch. That's not
> ideal, but I'd have to benchmark it to find out whether it's a significant
> overhead relative to the baseline of PREEMPT_DYNAMIC.
>
> For arm64 it'd be a bit nicer to have another static key check, and a call to
> __klp_sched_try_switch(). That way the static key check gets turned into a NOP
> in the common case, and the call to __klp_sched_try_switch() can be a direct
> call (potentially a tail-call if we made it return 0).
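
So on the arm64 side it'd be something like this (rough sketch, the key
name is made up):

  DECLARE_STATIC_KEY_FALSE(klp_sched_try_switch_key);

  void __klp_sched_try_switch(void);

  static __always_inline void klp_sched_try_switch(void)
  {
          /*
           * The static key check compiles to a NOP when no livepatch
           * transition is in progress, and the slow path is a direct
           * call instead of an indirect branch.
           */
          if (static_branch_unlikely(&klp_sched_try_switch_key))
                  __klp_sched_try_switch();
  }
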
Hm, it might be nice if our out-of-line static call implementation would
automatically do a static key check as part of static_call_cond() for
NULL-type static calls.
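
Roughly this shape (hand-waving sketch, not the real macros:
STATIC_CALL_ENABLED() is a made-up companion jump label, and the args
would have to move inside the macro):

  /*
   * Made-up sketch: pair each NULL-type static call with a jump label,
   * flipped by static_call_update(), so the NULL case becomes a NOP
   * instead of a branch to an empty trampoline.
   */
  #define static_call_cond(name, args...)                               \
  do {                                                                  \
          if (static_branch_unlikely(&STATIC_CALL_ENABLED(name)))       \
                  static_call(name)(args);                              \
  } while (0)
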
But the best answer is probably to just add inline static calls to
arm64. Is the lack of objtool the only thing blocking that?
Objtool is now modular, so all the controversial CFG reverse engineering
is optional; it shouldn't be too hard to enable objtool just for static
call inlines.
--
Josh