Date:   Sat, 12 Mar 2022 00:54:12 +0100
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Hou Tao <houtao1@...wei.com>, Alexei Starovoitov <ast@...nel.org>
Cc:     Martin KaFai Lau <kafai@...com>, Song Liu <songliubraving@...com>,
        John Fastabend <john.fastabend@...il.com>,
        Yonghong Song <yhs@...com>,
        Andrii Nakryiko <andrii@...nel.org>,
        "David S . Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        KP Singh <kpsingh@...nel.org>, netdev@...r.kernel.org,
        bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next 2/4] bpf: Introduce bpf_int_jit_abort()

On 3/9/22 1:33 PM, Hou Tao wrote:
> It will be used to clean up a subprog that has been JITed in the first
> pass but for which the extra pass has not run yet. This scenario is
> possible when the extra pass for a subprog in the middle fails. The
> failure may lead to an oops due to inconsistent state in the pack
> allocator (e.g. ro_hdr->size and use_bpf_prog_pack) and a memory leak
> in aux->jit_data.
> 
> For x86-64, bpf_int_jit_abort() frees the memory saved in
> aux->jit_data and falls back to interpreter mode, so that bpf_jit_free()
> does not end up calling bpf_jit_binary_pack_free().
> 
> Signed-off-by: Hou Tao <houtao1@...wei.com>
> ---
>   arch/x86/net/bpf_jit_comp.c | 24 ++++++++++++++++++++++++
>   include/linux/filter.h      |  1 +
>   kernel/bpf/core.c           |  9 +++++++++
>   kernel/bpf/verifier.c       |  3 +++
>   4 files changed, 37 insertions(+)
> 
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index ec3f00be2ac5..49bc0ddd55ae 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -2244,6 +2244,30 @@ struct x64_jit_data {
>   	struct jit_context ctx;
>   };
>   
> +void bpf_int_jit_abort(struct bpf_prog *prog)
> +{
> +	struct x64_jit_data *jit_data = prog->aux->jit_data;
> +	struct bpf_binary_header *header, *rw_header;
> +
> +	if (!jit_data)
> +		return;
> +
> +	prog->bpf_func = NULL;
> +	prog->jited = 0;
> +	prog->jited_len = 0;
> +
> +	header = jit_data->header;
> +	rw_header = jit_data->rw_header;
> +	bpf_arch_text_copy(&header->size, &rw_header->size,
> +			   sizeof(rw_header->size));
> +	bpf_jit_binary_pack_free(header, rw_header);
> +
> +	kvfree(jit_data->addrs);
> +	kfree(jit_data);
> +
> +	prog->aux->jit_data = NULL;
> +}
> +
>   #define MAX_PASSES 20
>   #define PADDING_PASSES (MAX_PASSES - 5)
>   
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 9bf26307247f..f3a913229edd 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -945,6 +945,7 @@ u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
>   	 (void *)__bpf_call_base)
>   
>   struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog);
> +void bpf_int_jit_abort(struct bpf_prog *prog);
>   void bpf_jit_compile(struct bpf_prog *prog);
>   bool bpf_jit_needs_zext(void);
>   bool bpf_jit_supports_kfunc_call(void);
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index ab630f773ec1..a1841e11524c 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2636,6 +2636,15 @@ struct bpf_prog * __weak bpf_int_jit_compile(struct bpf_prog *prog)
>   	return prog;
>   }
>   
> +/*
> + * If the arch JIT uses aux->jit_data to save temporary allocation state
> + * and supports subprogs, it needs to override this function to free the
> + * allocated memory and fall back to interpreter mode for the passed prog.
> + */
> +void __weak bpf_int_jit_abort(struct bpf_prog *prog)
> +{
> +}
> +
>   /* Stub for JITs that support eBPF. All cBPF code gets transformed into
>    * eBPF by the kernel and is later compiled by bpf_int_jit_compile().
>    */
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index e34264200e09..885e515cf83f 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -13086,6 +13086,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
>   		if (tmp != func[i] || func[i]->bpf_func != old_bpf_func) {
>   			verbose(env, "JIT doesn't support bpf-to-bpf calls\n");
>   			err = -ENOTSUPP;
> +			/* Abort extra pass for the remaining subprogs */
> +			while (++i < env->subprog_cnt)
> +				bpf_int_jit_abort(func[i]);

Don't quite follow this one. For example, if we'd fail in the second pass, the
goto out_addrs from the JIT would free and clear prog->aux->jit_data. If we'd
succeed but a different prog is returned, prog->aux->jit_data is released and
later the goto out_free in here would free the JITed prog via bpf_jit_free().
Which code path leaves prog->aux->jit_data as non-NULL such that the extra
bpf_int_jit_abort() is needed?
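
For reference, a minimal sketch of the jit_data teardown at the tail of the
x86 bpf_int_jit_compile() that the out_addrs path above refers to (an
approximate reconstruction for illustration, not the exact upstream lines):

	/* Runs both on JIT failure (!image, reached via goto out_addrs) and
	 * once the extra pass has completed for a (sub)prog: the temporary
	 * jit_data is freed and prog->aux->jit_data is reset to NULL, which
	 * is why a non-NULL jit_data after this point would be unexpected.
	 */
	if (!image || !prog->is_func || extra_pass) {
		if (image)
			bpf_prog_fill_jited_linfo(prog, addrs + 1);
out_addrs:
		kvfree(addrs);
		kfree(jit_data);
		prog->aux->jit_data = NULL;
	}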

>   			goto out_free;
>   		}
>   		cond_resched();
> 
