Message-ID: <20230127165236.rjcp6jm6csdta6z3@treble>
Date:   Fri, 27 Jan 2023 08:52:36 -0800
From:   Josh Poimboeuf <jpoimboe@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Petr Mladek <pmladek@...e.com>,
        Joe Lawrence <joe.lawrence@...hat.com>, kvm@...r.kernel.org,
        "Michael S. Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
        Jiri Kosina <jikos@...nel.org>, linux-kernel@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        "Seth Forshee (DigitalOcean)" <sforshee@...italocean.com>,
        live-patching@...r.kernel.org, Miroslav Benes <mbenes@...e.cz>
Subject: Re: [PATCH 0/2] vhost: improve livepatch switching for heavily
 loaded vhost worker kthreads

On Fri, Jan 27, 2023 at 11:37:02AM +0100, Peter Zijlstra wrote:
> On Thu, Jan 26, 2023 at 08:43:55PM -0800, Josh Poimboeuf wrote:
> > Here's another idea, have we considered this?  Have livepatch set
> > TIF_NEED_RESCHED on all kthreads to force them into schedule(), and then
> > have the scheduler call klp_try_switch_task() if TIF_PATCH_PENDING is
> > set.
> > 
> > Not sure how scheduler folks would feel about that ;-)

Hmmmm, with preemption I guess the above doesn't work for kthreads that
call cond_resched() instead of doing what vhost_worker() does (an
explicit need_resched() check followed by schedule()).
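
To make the idea concrete (the function name and hook point are made up
here, nothing compiled or tested), I'm picturing roughly:

/* kernel/livepatch/transition.c: untested sketch */

/*
 * Hypothetical hook, called by the scheduler for a task that has
 * TIF_PATCH_PENDING set and is voluntarily giving up the CPU, which
 * should be a safe point to run the reliable-stack check.
 */
void klp_sched_try_switch(struct task_struct *task)
{
	if (klp_patch_pending(task))
		klp_try_switch_task(task);
}

with the call wired in early on the schedule() path (before the rq lock
is taken), plus the livepatch transition code setting TIF_NEED_RESCHED
on kthreads so they actually reach it.  And per the above, the
cond_resched() case would need the same treatment on preemptible
kernels.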

> diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
> index f1b25ec581e0..06746095a724 100644
> --- a/kernel/livepatch/transition.c
> +++ b/kernel/livepatch/transition.c
> @@ -9,6 +9,7 @@
>  
>  #include <linux/cpu.h>
>  #include <linux/stacktrace.h>
> +#include <linux/stop_machine.h>
>  #include "core.h"
>  #include "patch.h"
>  #include "transition.h"
> @@ -334,6 +335,16 @@ static bool klp_try_switch_task(struct task_struct *task)
>  	return !ret;
>  }
>  
> +static int __stop_try_switch(void *arg)
> +{
> +	return klp_try_switch_task(arg) ? 0 : -EBUSY;
> +}
> +
> +static bool klp_try_switch_task_harder(struct task_struct *task)
> +{
> +	return !stop_one_cpu(task_cpu(task), __stop_try_switch, task);
> +}
> +
>  /*
>   * Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set.
>   * Kthreads with TIF_PATCH_PENDING set are woken up.

Doesn't work for PREEMPT+!ORC.  Non-ORC reliable unwinders will detect
the preemption on the stack and automatically report the stack as
unreliable.
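
(Spelling that out, and just paraphrasing the existing code path rather
than proposing anything new: even with the task pinned by the stopper,
klp_try_switch_task() eventually lands in klp_check_stack(), which
boils down to

	ret = stack_trace_save_tsk_reliable(task, entries,
					    ARRAY_SIZE(entries));
	if (ret < 0)
		return ret;	/* task stays unpatched */

and with CONFIG_PREEMPT a preempted kthread has an exception frame on
its stack, which a non-ORC reliable unwinder reports as unreliable, so
the switch fails anyway.)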

-- 
Josh
