Message-ID: <alpine.LSU.2.21.2505131529080.19621@pobox.suse.cz>
Date: Tue, 13 May 2025 15:34:50 +0200 (CEST)
From: Miroslav Benes <mbenes@...e.cz>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
cc: linux-kernel@...r.kernel.org, live-patching@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...hat.com>, mingo@...nel.com,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, jpoimboe@...nel.org,
jikos@...nel.org, pmladek@...e.com, joe.lawrence@...hat.com,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2] sched,livepatch: Untangle cond_resched() and
live-patching

Hi,

thanks for the updated version.

On Fri, 9 May 2025, Sebastian Andrzej Siewior wrote:
> From: Peter Zijlstra <peterz@...radead.org>
>
> With the goal of deprecating / removing VOLUNTARY preempt, live-patch
> needs to stop relying on cond_resched() to make forward progress.
>
> Instead, rely on schedule() with TASK_FREEZABLE set. Just like
> live-patching, the freezer needs to be able to stop tasks in a safe /
> known state.
>
> Compile tested only.

The livepatch selftests pass, and I ran some additional tests as well.

> [bigeasy: use likely() in __klp_sched_try_switch() and update comments]
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>

Acked-by: Miroslav Benes <mbenes@...e.cz>

A nit below in case there is another version; otherwise Petr might fix it
when merging.
> @@ -365,27 +356,20 @@ static bool klp_try_switch_task(struct task_struct *task)
>
> void __klp_sched_try_switch(void)
> {
> + /*
> + * This function is called from __schedule() while a context switch is
> + * about to happen. Preemption is already disabled and klp_mutex
> + * can't be acquired.
> + * Disabled preemption is used to prevent racing with other callers of
> + * klp_try_switch_task(). Thanks to task_call_func() they won't be
> + * able to switch to this task while it's running.
> + */
> + lockdep_assert_preemption_disabled();
> +
> + /* Make sure current didn't get patched */
> if (likely(!klp_patch_pending(current)))
> return;

This last comment is not precise. If !klp_patch_pending(), there is nothing
to do and we take the fast way out. So if it were up to me, I would remove
the comment line altogether.
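
For context, a simplified sketch of the other side of this synchronization,
i.e. roughly how klp_try_switch_task() hands the check over to
task_call_func() for remote tasks (paraphrased from memory of
kernel/livepatch/transition.c, so treat it as a sketch rather than the exact
code):

static int klp_check_and_switch_task(struct task_struct *task, void *arg)
{
        int ret;

        /* A task currently running on another CPU cannot be switched. */
        if (task_curr(task) && task != current)
                return -EBUSY;

        /* Look for to-be-patched / to-be-unpatched functions on the stack. */
        ret = klp_check_stack(task, arg);
        if (ret)
                return ret;

        /* Safe: move the task to the target patch state. */
        clear_tsk_thread_flag(task, TIF_PATCH_PENDING);
        task->patch_state = klp_target_state;
        return 0;
}

static bool klp_try_switch_task(struct task_struct *task)
{
        const char *old_name;
        int ret;

        /*
         * current is checked directly (e.g. from __klp_sched_try_switch()
         * with preemption disabled); any other task is first pinned in a
         * stable state by task_call_func().
         */
        if (task == current)
                ret = klp_check_and_switch_task(current, &old_name);
        else
                ret = task_call_func(task, klp_check_and_switch_task, &old_name);

        return !ret;
}

With preemption disabled in __klp_sched_try_switch(), current stays on its
CPU for the duration of its own check, while any remote caller going through
task_call_func() sees task_curr() and backs off with -EBUSY, which is what
the comment in the hunk above describes.
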
Miroslav