Message-ID: <Z-PNll7fJQzCDH35@pathway.suse.cz>
Date: Wed, 26 Mar 2025 10:49:10 +0100
From: Petr Mladek <pmladek@...e.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>, mingo@...nel.com,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, jpoimboe@...nel.org,
jikos@...nel.org, mbenes@...e.cz, joe.lawrence@...hat.com,
linux-kernel@...r.kernel.org, live-patching@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [RFC][PATCH] sched,livepatch: Untangle cond_resched() and
live-patching
On Mon 2025-03-24 14:49:09, Peter Zijlstra wrote:
>
> With the goal of deprecating / removing VOLUNTARY preempt, live-patch
> needs to stop relying on cond_resched() to make forward progress.
>
> Instead, rely on schedule() with TASK_FREEZABLE set. Just like
> live-patching, the freezer needs to be able to stop tasks in a safe /
> known state.
> Compile tested only.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> include/linux/livepatch_sched.h | 15 +++++--------
> include/linux/sched.h | 6 -----
> kernel/livepatch/transition.c | 30 ++++++-------------------
> kernel/sched/core.c | 50 +++++++----------------------------------
> 4 files changed, 21 insertions(+), 80 deletions(-)
>
> diff --git a/include/linux/livepatch_sched.h b/include/linux/livepatch_sched.h
> index 013794fb5da0..7e8171226dd7 100644
> --- a/include/linux/livepatch_sched.h
> +++ b/include/linux/livepatch_sched.h
> @@ -3,27 +3,24 @@
> #define _LINUX_LIVEPATCH_SCHED_H_
>
> #include <linux/jump_label.h>
> -#include <linux/static_call_types.h>
> +#include <linux/sched.h>
> +
>
> #ifdef CONFIG_LIVEPATCH
>
> void __klp_sched_try_switch(void);
>
> -#if !defined(CONFIG_PREEMPT_DYNAMIC) || !defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
> -
> DECLARE_STATIC_KEY_FALSE(klp_sched_try_switch_key);
>
> -static __always_inline void klp_sched_try_switch(void)
> +static __always_inline void klp_sched_try_switch(struct task_struct *curr)
> {
> - if (static_branch_unlikely(&klp_sched_try_switch_key))
> + if (static_branch_unlikely(&klp_sched_try_switch_key) &&
> + READ_ONCE(curr->__state) & TASK_FREEZABLE)
> __klp_sched_try_switch();
> }
Do we really need to check the TASK_FREEZABLE state, please?
My understanding is that TASK_FREEZABLE is set when kthreads go into
a "freezable" sleep, e.g. via wait_event_freezable().
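For context, the freezable wait helpers pass the flag in via the sleep
state, roughly like this (a sketch from memory of include/linux/wait.h
after the freezer rework, details may differ):

  #define __wait_event_freezable(wq_head, condition)                  \
          ___wait_event(wq_head, condition,                           \
                        (TASK_INTERRUPTIBLE|TASK_FREEZABLE),          \
                        0, 0, schedule())

so only tasks sleeping via these wrappers would ever have the bit set
in ->__state.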
But __klp_sched_try_switch() should be safe when the task is not
running and the stack is reliable. IMHO, it should be safe anytime
it is being scheduled out.
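For illustration, the per-task switch itself looks roughly like this
(a sketch from memory of klp_check_and_switch_task() in
kernel/livepatch/transition.c, details may differ):

  static int klp_check_and_switch_task(struct task_struct *task, void *arg)
  {
          int ret;

          /* A task currently running on a CPU cannot be switched. */
          if (task_curr(task) && task != current)
                  return -EBUSY;

          /* Prove that no livepatched function is on the stack. */
          ret = klp_check_stack(task, arg);
          if (ret)
                  return ret;

          clear_tsk_thread_flag(task, TIF_PATCH_PENDING);
          task->patch_state = klp_target_state;
          return 0;
  }

Nothing here depends on how the task went to sleep, only on it staying
off the CPU while the stack is checked.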
Note that wait_event_freezable() is a good location. It is usually called
in the main loop of a kthread, where the stack is shallow, so the chance
that the task is not running a livepatched function is higher than at
another random schedulable location.
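For illustration, the typical pattern is (hypothetical kthread;
my_kthread_fn(), my_waitq, work_pending_for_me() and process_work()
are made-up names):

  static int my_kthread_fn(void *data)
  {
          while (!kthread_should_stop()) {
                  /* Top of the main loop: shallow stack. */
                  wait_event_freezable(my_waitq,
                                       work_pending_for_me() ||
                                       kthread_should_stop());

                  /* Deep, possibly livepatched, call chains. */
                  process_work();
          }
          return 0;
  }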
But we actually wanted to have it in cond_resched() because it might take
a long time for the kthread to reach the main loop and sleep there.
Best Regards,
Petr