Message-ID: <20200903194504.yhx6wpz6wayxb6mg@ast-mbp.dhcp.thefacebook.com>
Date:   Thu, 3 Sep 2020 12:45:04 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc:     ast@...nel.org, daniel@...earbox.net, bpf@...r.kernel.org,
        netdev@...r.kernel.org, bjorn.topel@...el.com,
        magnus.karlsson@...el.com
Subject: Re: [PATCH v7 bpf-next 5/7] bpf: limit caller's stack depth 256 for
 subprogs with tailcalls

On Wed, Sep 02, 2020 at 10:08:13PM +0200, Maciej Fijalkowski wrote:
> Protect against a potential stack overflow that might happen when bpf2bpf
> calls get combined with tailcalls. Limit the caller's stack depth for
> such a case to 256 so that the worst case scenario results in an 8k
> stack size (32, the tail call limit, * 256 = 8k).
> 
> Suggested-by: Alexei Starovoitov <ast@...nel.org>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> ---
>  include/linux/bpf_verifier.h |  1 +
>  kernel/bpf/verifier.c        | 28 ++++++++++++++++++++++++++++
>  2 files changed, 29 insertions(+)
> 
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 53c7bd568c5d..5026b75db972 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -358,6 +358,7 @@ struct bpf_subprog_info {
>  	u32 start; /* insn idx of function entry point */
>  	u32 linfo_idx; /* The idx to the main_prog->aux->linfo */
>  	u16 stack_depth; /* max. stack depth used by this function */
> +	bool has_tail_call;
>  };
>  
>  /* single container for all structs
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 8f9e95f5f73f..b12527d87edb 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1490,6 +1490,8 @@ static int check_subprogs(struct bpf_verifier_env *env)
>  	for (i = 0; i < insn_cnt; i++) {
>  		u8 code = insn[i].code;
>  
> +		if (insn[i].imm == BPF_FUNC_tail_call)
> +			subprog[cur_subprog].has_tail_call = true;

It will randomly match on other opcodes, since insn->imm is also used as a
plain immediate operand by non-call instructions.
This check probably should be moved a few lines down, after checking
BPF_JMP && BPF_CALL && insn->src_reg != BPF_PSEUDO_CALL.
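
Something along these lines (an untested sketch) would only mark the subprog
on an actual call to the tail_call helper, and keeps the marking in the same
single pass over the insns:

	/* helper calls are BPF_JMP|BPF_CALL with src_reg != BPF_PSEUDO_CALL */
	if (BPF_CLASS(code) == BPF_JMP && BPF_OP(code) == BPF_CALL &&
	    insn[i].src_reg != BPF_PSEUDO_CALL &&
	    insn[i].imm == BPF_FUNC_tail_call)
		subprog[cur_subprog].has_tail_call = true;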

Another option would be to move it to check_helper_call(), since that
function already matches on:

	if (func_id == BPF_FUNC_tail_call) {
		err = check_reference_leak(env);

but adding a find_subprog() lookup there to mark the subprog seems less
efficient than doing it during check_subprogs().
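
For reference, that block in check_helper_call() currently reads in full as
below; a hypothetical marking there would sit next to the reference leak
check, after resolving which subprog the call belongs to:

	if (func_id == BPF_FUNC_tail_call) {
		err = check_reference_leak(env);
		if (err) {
			verbose(env, "tail_call would lead to reference leak\n");
			return err;
		}
		/* hypothetical: subprog[...].has_tail_call = true would go
		 * here, which first needs a lookup of the enclosing subprog
		 */
	}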

>  		if (BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32)
>  			goto next;
>  		if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL)
