Message-ID: <901aa9a48ef02eeec73dedf051dd0b14436ac22f.camel@fb.com>
Date: Sat, 7 May 2022 19:18:51 +0000
From: Rik van Riel <riel@...com>
To: Song Liu <songliubraving@...com>
CC: "song@...nel.org" <song@...nel.org>,
"joe.lawrence@...hat.com" <joe.lawrence@...hat.com>,
"jpoimboe@...hat.com" <jpoimboe@...hat.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
"live-patching@...r.kernel.org" <live-patching@...r.kernel.org>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>
Subject: Re: [RFC] sched,livepatch: call klp_try_switch_task in __cond_resched
On Sat, 2022-05-07 at 19:04 +0000, Song Liu wrote:
> > On May 7, 2022, at 11:26 AM, Rik van Riel <riel@...com> wrote:
> >
> > On Sat, 2022-05-07 at 10:46 -0700, Song Liu wrote:
> > > Busy kernel threads may block the transition of livepatch. Call
> > > klp_try_switch_task from __cond_resched to make the transition
> > > easier.
> > >
> > That seems like a useful idea given what we're seeing on
> > some systems, but I do have a nitpick with your patch :)
> >
> > > +++ b/kernel/sched/core.c
> > > @@ -6990,6 +6990,9 @@ SYSCALL_DEFINE0(sched_yield)
> > >  #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
> > >  int __sched __cond_resched(void)
> > >  {
> > > +	if (unlikely(klp_patch_pending(current)))
> > > +		klp_try_switch_task(current);
> > > +
> > >  	if (should_resched(0)) {
> > >  		preempt_schedule_common();
> > >  		return 1;
> >
> > While should_resched() and klp_patch_pending() check the same
> > cache line (task->flags), the patch now has two separate
> > conditionals on it.
> >
> > Would it make sense to fold the tests for TIF_NEED_RESCHED
> > and TIF_PATCH_PENDING into should_resched(), and then re-do
> > the test for TIF_PATCH_PENDING only if should_resched()
> > returns true?
>
> x86 has a different version of should_resched(),
Huh, I just looked at that, and the x86 should_resched()
only seems to check that we _can_ resched, not whether
we should...
/*
 * Returns true when we need to resched and can (barring IRQ state).
 */
static __always_inline bool should_resched(int preempt_offset)
{
	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset);
}
I wonder whether that was intended, and why, or whether
the x86 should_resched() should also be checking for
TIF_NEED_RESCHED?
If the latter, the check for TIF_PATCH_PENDING could
just be merged there, too. Probably in the same macro
called from both places.
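Something along these lines, maybe. This is a rough, untested
sketch against the generic version, and it would still need a
!CONFIG_LIVEPATCH fallback for the TIF_PATCH_PENDING test:

/*
 * Untested sketch: fold the livepatch test into the generic
 * should_resched(), keeping the fast path to a single read of
 * the thread info flags (TIF_NEED_RESCHED and TIF_PATCH_PENDING
 * live in the same word).
 */
static __always_inline bool should_resched(int preempt_offset)
{
	return unlikely(preempt_count() == preempt_offset &&
			(tif_need_resched() ||
			 test_thread_flag(TIF_PATCH_PENDING)));
}

The caller would then re-test klp_patch_pending(current) on the
slow path only, as suggested above.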
> so I am not
> quite sure what's the right way to modify should_resched().
> OTOH, we can probably leave should_resched() as-is and just
> move the klp_patch_pending() check, like
>
> int __sched __cond_resched(void)
> {
> 	if (should_resched(0)) {
> 		if (unlikely(klp_patch_pending(current)))
> 			klp_try_switch_task(current);
>
> 		preempt_schedule_common();
> 		return 1;
> 	}
> #ifndef CONFIG_PREEMPT_RCU
> 	rcu_all_qs();
> #endif
> 	return 0;
> }
>
> Given that live patch user space usually waits for many seconds,
> I guess this should work?
That should certainly work on x86, where should_resched
seems to always return true when we can reschedule?