Date: Tue, 11 Jul 2023 13:07:35 -0700 (PDT)
From: Palmer Dabbelt <palmer@...belt.com>
To: daniel@...earbox.net
CC: bjorn@...nel.org, ast@...nel.org, andrii@...nel.org, bpf@...r.kernel.org,
  netdev@...r.kernel.org, Bjorn Topel <bjorn@...osinc.com>, martin.lau@...ux.dev, song@...nel.org,
  yhs@...com, john.fastabend@...il.com, kpsingh@...nel.org, sdf@...gle.com,
  haoluo@...gle.com, jolsa@...nel.org, pulehui@...wei.com, luke.r.nels@...il.com, xi.wang@...il.com,
  linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org, linux@...osinc.com
Subject: Re: [PATCH bpf] riscv, bpf: Fix inconsistent JIT image generation

On Tue, 11 Jul 2023 13:03:47 PDT (-0700), daniel@...earbox.net wrote:
> On 7/11/23 7:50 PM, Palmer Dabbelt wrote:
>> On Mon, 10 Jul 2023 00:41:31 PDT (-0700), bjorn@...nel.org wrote:
>>> From: Björn Töpel <bjorn@...osinc.com>
>>>
>>> In order to generate the prologue and epilogue, the BPF JIT needs to
>>> know which registers are clobbered. Therefore, during the pre-final
>>> passes, the prologue is generated after the body of the program, i.e.
>>> in body-prologue-epilogue order. Then, in the final pass, a proper
>>> prologue-body-epilogue JITted image is generated.
>>>
>>> This scheme has worked most of the time. However, for some large
>>> programs with many jumps, e.g. the test_kmod.sh BPF selftest with
>>> hardening enabled (blinding constants), it has been shown to be
>>> incorrect: in the final pass, when the proper prologue-body-epilogue
>>> image is generated, the image has not yet converged. As a result, the
>>> final image ends up with incorrect jump offsets. The following is an
>>> excerpt from an incorrect image:
>>>
>>>    | ...
>>>    |     3b8:       00c50663                beq     a0,a2,3c4 <.text+0x3c4>
>>>    |     3bc:       0020e317                auipc   t1,0x20e
>>>    |     3c0:       49630067                jalr    zero,1174(t1) # 20e852 <.text+0x20e852>
>>>    | ...
>>>    |  20e84c:       8796                    c.mv    a5,t0
>>>    |  20e84e:       6422                    c.ldsp  s0,8(sp)    # Epilogue start
>>>    |  20e850:       6141                    c.addi16sp      sp,16
>>>    |  20e852:       853e                    c.mv    a0,a5       # Incorrect jump target
>>>    |  20e854:       8082                    c.jr    ra
>>>
>>> The image has shrunk, and the epilogue offset is incorrect in the
>>> final pass.
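
The root cause is that branch targets emitted in a given pass are computed
from the offsets recorded by the previous pass, so they are only valid once
the image size has stopped changing between passes. A minimal sketch of that
convergence condition follows; the names are hypothetical and simplified,
not the kernel's actual rv_jit_context:

  #include <stdbool.h>

  /* One pass's view of the image layout (hypothetical, simplified). */
  struct pass_state {
  	int ninsns;		/* total RV instructions emitted this pass */
  	int epilogue_offset;	/* where this pass placed the epilogue */
  	int *offset;		/* per-BPF-insn offsets from the *previous* pass */
  };

  /*
   * Jumps emitted in pass N use pass N-1's offsets, so they are only
   * self-consistent once two consecutive passes stop changing the size.
   * Emitting the final image from a pass that still shrank leaves jumps
   * aimed at where the epilogue used to be, as in the excerpt above.
   */
  static bool layout_converged(const struct pass_state *st, int prev_ninsns)
  {
  	return st->ninsns == prev_ninsns;
  }
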
>>>
>>> Correct the problem by always generating a proper prologue-body-epilogue
>>> output, which means that the first pass only generates the body, in
>>> order to track which registers are touched.
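
A minimal sketch of the corrected per-pass ordering described above, using
hypothetical helper names rather than the actual arch/riscv/net functions
(the real change is in the diff below):

  #include <stdbool.h>

  struct sketch_ctx {
  	int ninsns;
  	int prologue_len;
  	int epilogue_offset;
  };

  /* Placeholder emitters; the real ones append RISC-V instructions. */
  static void sketch_build_prologue(struct sketch_ctx *ctx) { ctx->ninsns += 8; }
  static void sketch_build_body(struct sketch_ctx *ctx, bool emit) { if (emit) ctx->ninsns += 100; }
  static void sketch_build_epilogue(struct sketch_ctx *ctx) { ctx->ninsns += 6; }

  static void sketch_jit_passes(struct sketch_ctx *ctx, int max_passes)
  {
  	int pass, prev_ninsns = -1;

  	/* Pass 0: walk the body only to learn which registers are clobbered. */
  	sketch_build_body(ctx, false);

  	for (pass = 0; pass < max_passes; pass++) {
  		ctx->ninsns = 0;

  		/* Prologue first, and remember its length ... */
  		sketch_build_prologue(ctx);
  		ctx->prologue_len = ctx->ninsns;

  		/* ... so body branch offsets include it from the start. */
  		sketch_build_body(ctx, true);

  		ctx->epilogue_offset = ctx->ninsns;
  		sketch_build_epilogue(ctx);

  		if (ctx->ninsns == prev_ninsns)
  			break;			/* layout converged */
  		prev_ninsns = ctx->ninsns;
  	}
  }
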
>>>
>>> Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G")
>>> Signed-off-by: Björn Töpel <bjorn@...osinc.com>
>>> ---
>>>   arch/riscv/net/bpf_jit.h      |  6 +++---
>>>   arch/riscv/net/bpf_jit_core.c | 19 +++++++++++++------
>>>   2 files changed, 16 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h
>>> index bf9802a63061..2717f5490428 100644
>>> --- a/arch/riscv/net/bpf_jit.h
>>> +++ b/arch/riscv/net/bpf_jit.h
>>> @@ -69,7 +69,7 @@ struct rv_jit_context {
>>>   	struct bpf_prog *prog;
>>>   	u16 *insns;		/* RV insns */
>>>   	int ninsns;
>>> -	int body_len;
>>> +	int prologue_len;
>>>   	int epilogue_offset;
>>>   	int *offset;		/* BPF to RV */
>>>   	int nexentries;
>>> @@ -216,8 +216,8 @@ static inline int rv_offset(int insn, int off, struct rv_jit_context *ctx)
>>>   	int from, to;
>>>
>>>   	off++; /* BPF branch is from PC+1, RV is from PC */
>>> -	from = (insn > 0) ? ctx->offset[insn - 1] : 0;
>>> -	to = (insn + off > 0) ? ctx->offset[insn + off - 1] : 0;
>>> +	from = (insn > 0) ? ctx->offset[insn - 1] : ctx->prologue_len;
>>> +	to = (insn + off > 0) ? ctx->offset[insn + off - 1] : ctx->prologue_len;
>>>   	return ninsns_rvoff(to - from);
>>>   }
>>>
>>> diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
>>> index 737baf8715da..7a26a3e1c73c 100644
>>> --- a/arch/riscv/net/bpf_jit_core.c
>>> +++ b/arch/riscv/net/bpf_jit_core.c
>>> @@ -44,7 +44,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>>>   	unsigned int prog_size = 0, extable_size = 0;
>>>   	bool tmp_blinded = false, extra_pass = false;
>>>   	struct bpf_prog *tmp, *orig_prog = prog;
>>> -	int pass = 0, prev_ninsns = 0, prologue_len, i;
>>> +	int pass = 0, prev_ninsns = 0, i;
>>>   	struct rv_jit_data *jit_data;
>>>   	struct rv_jit_context *ctx;
>>>
>>> @@ -83,6 +83,12 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>>>   		prog = orig_prog;
>>>   		goto out_offset;
>>>   	}
>>> +
>>> +	if (build_body(ctx, extra_pass, NULL)) {
>>> +		prog = orig_prog;
>>> +		goto out_offset;
>>> +	}
>>> +
>>>   	for (i = 0; i < prog->len; i++) {
>>>   		prev_ninsns += 32;
>>>   		ctx->offset[i] = prev_ninsns;
>>> @@ -91,12 +97,15 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>>>   	for (i = 0; i < NR_JIT_ITERATIONS; i++) {
>>>   		pass++;
>>>   		ctx->ninsns = 0;
>>> +
>>> +		bpf_jit_build_prologue(ctx);
>>> +		ctx->prologue_len = ctx->ninsns;
>>> +
>>>   		if (build_body(ctx, extra_pass, ctx->offset)) {
>>>   			prog = orig_prog;
>>>   			goto out_offset;
>>>   		}
>>> -		ctx->body_len = ctx->ninsns;
>>> -		bpf_jit_build_prologue(ctx);
>>> +
>>>   		ctx->epilogue_offset = ctx->ninsns;
>>>   		bpf_jit_build_epilogue(ctx);
>>>
>>> @@ -162,10 +171,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>>>
>>>   	if (!prog->is_func || extra_pass) {
>>>   		bpf_jit_binary_lock_ro(jit_data->header);
>>> -		prologue_len = ctx->epilogue_offset - ctx->body_len;
>>>   		for (i = 0; i < prog->len; i++)
>>> -			ctx->offset[i] = ninsns_rvoff(prologue_len +
>>> -						      ctx->offset[i]);
>>> +			ctx->offset[i] = ninsns_rvoff(ctx->offset[i]);
>>>   		bpf_prog_fill_jited_linfo(prog, ctx->offset);
>>>   out_offset:
>>>   		kfree(ctx->offset);
>>>
>>> base-commit: 496720b7cfb6574a8f6f4d434f23e3d1e6cfaeb9
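
For the bpf_jit.h hunk: because ctx->offset[] now measures from the start of
the image (prologue included), the branch-distance helper falls back to the
prologue length, not zero, when the source or target is the first BPF
instruction. A simplified sketch of that calculation, mirroring the patched
rv_offset() with plain types (the real helper returns ninsns_rvoff(to - from),
i.e. the distance converted to bytes):

  static int sketch_rv_offset(const int *offset, int prologue_len,
  			    int insn, int off)
  {
  	int from, to;

  	off++;	/* BPF branch is from PC+1, RV is from PC */
  	from = insn > 0 ? offset[insn - 1] : prologue_len;
  	to = insn + off > 0 ? offset[insn + off - 1] : prologue_len;
  	return to - from;
  }
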
>>
>> Acked-by: Palmer Dabbelt <palmer@...osinc.com>
>> Reviewed-by: Palmer Dabbelt <palmer@...osinc.com>
>>
>> I'm assuming this is aimed at the BPF tree, but LMK if you guys want me
>> to pick it up -- I've already got something for this week, so it's easy
>> on my end.  I'm dropping it from my queue and patchwork for now, though.
>
> Sounds good, we applied it to bpf already.

Thanks!

>
> Thanks,
> Daniel
