Message-ID: <Y/YgARbqsyvzebAl@alley>
Date: Wed, 22 Feb 2023 15:00:33 +0100
From: Petr Mladek <pmladek@...e.com>
To: Josh Poimboeuf <jpoimboe@...nel.org>
Cc: live-patching@...r.kernel.org, linux-kernel@...r.kernel.org,
Seth Forshee <sforshee@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Song Liu <song@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Joe Lawrence <joe.lawrence@...hat.com>,
Miroslav Benes <mbenes@...e.cz>,
Jiri Kosina <jikos@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Rik van Riel <riel@...riel.com>
Subject: Re: [PATCH v2 2/3] livepatch,sched: Add livepatch task switching to
cond_resched()
On Fri 2023-02-17 14:22:55, Josh Poimboeuf wrote:
> There have been reports [1][2] of live patches failing to complete
> within a reasonable amount of time due to CPU-bound kthreads.
>
> Fix it by patching tasks in cond_resched().
>
> There are four different flavors of cond_resched(), depending on the
> kernel configuration. Hook into all of them.
>
> A more elegant solution might be to use a preempt notifier. However,
> non-ORC unwinders can't unwind a preempted task reliably.
>
> [1] https://lore.kernel.org/lkml/20220507174628.2086373-1-song@kernel.org/
> [2] https://lkml.kernel.org/lkml/20230120-vhost-klp-switching-v1-0-7c2b65519c43@kernel.org
>
> --- a/kernel/livepatch/transition.c
> +++ b/kernel/livepatch/transition.c
> @@ -588,14 +641,10 @@ void klp_reverse_transition(void)
> klp_target_state == KLP_PATCHED ? "patching to unpatching" :
> "unpatching to patching");
>
> - klp_transition_patch->enabled = !klp_transition_patch->enabled;
> -
> - klp_target_state = !klp_target_state;
> -
> /*
> * Clear all TIF_PATCH_PENDING flags to prevent races caused by
> - * klp_update_patch_state() running in parallel with
> - * klp_start_transition().
> + * klp_update_patch_state() or __klp_sched_try_switch() running in
> + * parallel with the reverse transition.
> */
> read_lock(&tasklist_lock);
> for_each_process_thread(g, task)
> @@ -605,9 +654,16 @@ void klp_reverse_transition(void)
> for_each_possible_cpu(cpu)
> clear_tsk_thread_flag(idle_task(cpu), TIF_PATCH_PENDING);
>
> - /* Let any remaining calls to klp_update_patch_state() complete */
> + /*
> + * Make sure all existing invocations of klp_update_patch_state() and
> + * __klp_sched_try_switch() see the cleared TIF_PATCH_PENDING before
> + * starting the reverse transition.
> + */
> klp_synchronize_transition();
>
> + /* All patching has stopped, now start the reverse transition. */
> + klp_transition_patch->enabled = !klp_transition_patch->enabled;
> + klp_target_state = !klp_target_state;
I have double checked the synchronization, and we need the following here:
/*
* Make sure klp_update_patch_state() and __klp_sched_try_switch()
* see the updated klp_target_state before TIF_PATCH_PENDING
* is set again in klp_start_transition().
*/
smp_wmb();
The same is achieved by smp_wmb() in klp_init_transition().
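To make the pairing concrete, here is a minimal sketch of the ordering
requirement (a paraphrase, not the literal code from
kernel/livepatch/transition.c; the real writer side loops over all tasks
in klp_start_transition()):

	/* Writer side, simplified (klp_reverse_transition()): */
	klp_target_state = !klp_target_state;		/* store A */
	smp_wmb();					/* order A before B */
	/* ...klp_start_transition() then does, for each task: */
	set_tsk_thread_flag(task, TIF_PATCH_PENDING);	/* store B */

	/* Reader side, simplified (klp_update_patch_state()): */
	if (test_and_clear_tsk_thread_flag(task, TIF_PATCH_PENDING))
		/*
		 * The RMW above is fully ordered, so a task that
		 * sees store B is guaranteed to also see store A.
		 */
		task->patch_state = READ_ONCE(klp_target_state);

Without the smp_wmb(), a task could observe the newly set
TIF_PATCH_PENDING but still read the stale klp_target_state and switch
in the wrong direction.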
Note that the extra barrier was missing here before because
klp_target_state used to be set before klp_synchronize_transition().
That was fine because klp_update_patch_state() was called only at
locations where a transition in either direction was always safe.
Just for the record: we now need to modify @klp_target_state after
klp_synchronize_transition(). The value is used by
__klp_sched_try_switch() to decide when the transition is safe;
it defines which functions must not be on the stack.
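For context, the direction dependence looks roughly like this (a loose
paraphrase of klp_check_stack_func(), ignoring the handling of nops and
previously patched functions):

	/* klp_check_stack_func(), roughly: */
	if (klp_target_state == KLP_UNPATCHED) {
		/* unpatching: the new function must not be on the stack */
		func_addr = (unsigned long)func->new_func;
		func_size = func->new_size;
	} else {
		/* patching: the old function must not be on the stack */
		func_addr = (unsigned long)func->old_func;
		func_size = func->old_size;
	}
	/*
	 * The switch fails if any stack entry falls inside
	 * [func_addr, func_addr + func_size).
	 */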
I am sorry that I missed this when reviewing v1. I think I needed
to see the new code with fresh eyes.
> klp_start_transition();
> }
I do not see any other problem. With the above barrier added,
feel free to use:
Reviewed-by: Petr Mladek <pmladek@...e.com>
That is for the livepatching part. I also checked the scheduler
code and it looked fine, but I would not put my hand in the fire
for it.
Best Regards,
Petr