Message-ID: <YVxbP5fFLY10mqhy@hirez.programming.kicks-ass.net>
Date: Tue, 5 Oct 2021 16:03:43 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Petr Mladek <pmladek@...e.com>
Cc: gor@...ux.ibm.com, jpoimboe@...hat.com, jikos@...nel.org,
mbenes@...e.cz, mingo@...nel.org, linux-kernel@...r.kernel.org,
joe.lawrence@...hat.com, fweisbec@...il.com, tglx@...utronix.de,
hca@...ux.ibm.com, svens@...ux.ibm.com, sumanthk@...ux.ibm.com,
live-patching@...r.kernel.org, paulmck@...nel.org,
rostedt@...dmis.org, x86@...nel.org
Subject: Re: [PATCH v2 03/11] sched,livepatch: Use task_call_func()
On Tue, Oct 05, 2021 at 01:40:24PM +0200, Petr Mladek wrote:
> On Wed 2021-09-29 17:17:26, Peter Zijlstra wrote:
> > Instead of frobbing around with scheduler internals, use the shiny new
> > task_call_func() interface.
> >
> > --- a/kernel/livepatch/transition.c
> > +++ b/kernel/livepatch/transition.c
> > @@ -274,6 +266,22 @@ static int klp_check_stack(struct task_s
> > return 0;
> > }
> >
> > +static int klp_check_and_switch_task(struct task_struct *task, void *arg)
> > +{
> > + int ret;
> > +
> > + if (task_curr(task))
>
> This must be
>
> if (task_curr(task) && task != current)
>
> , otherwise the task is not able to migrate itself. The condition was
> lost when reshuffling the original code, see below.
Urgh, yeah, I misread that and figured task_curr() should already capture
current, but the extra clause is there to exempt current :/
> JFYI, I have missed it during review. I am actually surprised that the
> process could check its own stack reliably. But it seems to work.
Ah, unwinding yourself is actually the only sane option ;-)
> > - rq = task_rq_lock(task, &flags);
> > + ret = task_call_func(task, klp_check_and_switch_task, &old_name);
>
> It looks correct. JFYI, this is why:
>
> The logic seems to be exactly the same, except for the one fallout
> mentioned above. So the only problem might be races.
>
> The only important thing is that the task must not be running on any CPU
> when klp_check_stack(task, arg) is called. In other words, the stack
> must stay the same when being checked.
>
> The original code prevented races by taking task_rq_lock().
> And task_call_func() is slightly more relaxed but it looks safe enough:
>
> + it still takes rq lock when the task is in runnable state.
> + it always takes p->pi_lock that prevents moving the task
> into runnable state by try_to_wake_up().
Correct, the new task_call_func() is trying hard to not take rq->lock,
but should be effectively identical to task_rq_lock().
> With the added (task != current) check:
Done
> Reviewed-by: Petr Mladek <pmladek@...e.com>
> Tested-by: Petr Mladek <pmladek@...e.com>
Thanks!