Date:   Thu, 27 Apr 2023 09:26:28 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Yafang Shao <laoar.shao@...il.com>
Cc:     ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
        kafai@...com, songliubraving@...com, yhs@...com,
        john.fastabend@...il.com, kpsingh@...nel.org, sdf@...gle.com,
        haoluo@...gle.com, jolsa@...nel.org, mhiramat@...nel.org,
        bpf@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next 5/6] bpf: Improve tracing recursion prevention mechanism

On Mon, 17 Apr 2023 15:47:36 +0000
Yafang Shao <laoar.shao@...il.com> wrote:

> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index f61d513..3df39a5 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -842,15 +842,21 @@ static __always_inline u64 notrace bpf_prog_start_time(void)
>  static u64 notrace __bpf_prog_enter_recur(struct bpf_prog *prog, struct bpf_tramp_run_ctx *run_ctx)
>  	__acquires(RCU)

Because __bpf_prog_enter_recur() and __bpf_prog_exit_recur() can
legitimately nest (as you pointed out later in the thread), I think my
original plan is the way to go.

>  {
> -	rcu_read_lock();
> -	migrate_disable();
> -
> -	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
> +	int bit;
>  
> -	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
> +	rcu_read_lock();
> +	bit = test_recursion_try_acquire(_THIS_IP_, _RET_IP_);
> +	run_ctx->recursion_bit = bit;
> +	if (bit < 0) {
> +		preempt_disable_notrace();
>  		bpf_prog_inc_misses_counter(prog);
> +		preempt_enable_notrace();
>  		return 0;
>  	}
> +
> +	migrate_disable();

Just wrap the migrate_disable()/migrate_enable() calls themselves in the
recursion protection.

That is, here add:

	test_recursion_release(bit);

No need to save it in the run_ctx, as you can use a local variable.

As I mentioned, if the check passes around migrate_disable() it will also
pass around migrate_enable(), so the two calls will still be properly
paired, even if only the traced migrate_enable() starts recursing.


  // enter path
  bit = test_recursion_try_acquire() // OK
  if (bit < 0)
	return;
  migrate_disable();
  test_recursion_release(bit);

  [..]

  // exit path
  bit = test_recursion_try_acquire() // OK
  migrate_enable() // traced and recurses...

    bit = test_recursion_try_acquire() // fails
    if (bit < 0)
          return; // returns here
    migrate_disable() // does not get called.

  test_recursion_release(bit) // back in the exit path, still paired

The recursion protection around migrate_disable()/migrate_enable() is needed
because they are called before the other checks. You can't attach the
test_recursion logic to the __bpf_prog_enter/exit() routines themselves,
because those can legitimately recurse.
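
To make the placement concrete, here is a rough, untested sketch, using the
test_recursion_try_acquire()/test_recursion_release() API from your patch
and eliding the unchanged teardown in the exit path:

  static u64 notrace __bpf_prog_enter_recur(struct bpf_prog *prog,
					    struct bpf_tramp_run_ctx *run_ctx)
	__acquires(RCU)
  {
	int bit;

	rcu_read_lock();

	/* Guard only the migrate_disable() call itself. */
	bit = test_recursion_try_acquire(_THIS_IP_, _RET_IP_);
	if (bit < 0) {
		preempt_disable_notrace();
		bpf_prog_inc_misses_counter(prog);
		preempt_enable_notrace();
		return 0;
	}
	migrate_disable();
	test_recursion_release(bit);	/* bit stays a local variable */

	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
	return bpf_prog_start_time();
  }

  static void notrace __bpf_prog_exit_recur(struct bpf_prog *prog, u64 start,
					    struct bpf_tramp_run_ctx *run_ctx)
	__releases(RCU)
  {
	int bit;

	/* ... run_ctx/stats teardown as before ... */

	/*
	 * This passes iff the acquire around migrate_disable() passed,
	 * so disable/enable stay paired even if migrate_enable()
	 * itself recurses.
	 */
	bit = test_recursion_try_acquire(_THIS_IP_, _RET_IP_);
	if (bit >= 0) {
		migrate_enable();
		test_recursion_release(bit);
	}
	rcu_read_unlock();
  }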

-- Steve


> +
> +	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
>  	return bpf_prog_start_time();
>  }
