Message-ID: <CAADnVQJnhfbALtNkCauS_ZwRfybcb_mryEvZW7Uu1uOSshQ9Ew@mail.gmail.com>
Date: Wed, 11 Oct 2023 06:38:56 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Hao Sun <sunhao.th@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next v3 1/3] bpf: Detect jumping to reserved code
during check_cfg()
On Wed, Oct 11, 2023 at 2:01 AM Hao Sun <sunhao.th@...il.com> wrote:
>
> Currently, we don't check whether the branch-taken target of a jump is the
> reserved code of a ld_imm64. Instead, such an issue is only caught later in
> check_ld_imm(). The verifier gives the following log in such a case:
>
> func#0 @0
> 0: R1=ctx(off=0,imm=0) R10=fp0
> 0: (18) r4 = 0xffff888103436000 ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
> 2: (18) r1 = 0x1d ; R1_w=29
> 4: (55) if r4 != 0x0 goto pc+4 ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
> 5: (1c) w1 -= w1 ; R1_w=0
> 6: (18) r5 = 0x32 ; R5_w=50
> 8: (56) if w5 != 0xfffffff4 goto pc-2
> mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx -1
> mark_precise: frame0: regs=r5 stack= before 6: (18) r5 = 0x32
> 7: R5_w=50
> 7: BUG_ld_00
> invalid BPF_LD_IMM insn
>
> Here the verifier rejects the program because it thinks the insn at 7 is an
> invalid BPF_LD_IMM, but such an error log is not accurate: the issue is a
> jump into reserved code, not an invalid insn in the program. Therefore, make
> the verifier check the jump target during check_cfg(). For the same program,
> the verifier reports the following log:
>
> func#0 @0
> jump to reserved code from insn 8 to 7
>
> Signed-off-by: Hao Sun <sunhao.th@...il.com>
> ---
> kernel/bpf/verifier.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index eed7350e15f4..725ac0b464cf 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -14980,6 +14980,7 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
> {
> int *insn_stack = env->cfg.insn_stack;
> int *insn_state = env->cfg.insn_state;
> + struct bpf_insn *insns = env->prog->insnsi;
>
> if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
> return DONE_EXPLORING;
> @@ -14993,6 +14994,12 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
> return -EINVAL;
> }
>
> + if (e == BRANCH && insns[w].code == 0) {
> + verbose_linfo(env, t, "%d", t);
> + verbose(env, "jump to reserved code from insn %d to %d\n", t, w);
> + return -EINVAL;
> + }
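For context, the "reserved code" referred to in the commit message is the second
8-byte slot of a BPF_LD | BPF_DW | BPF_IMM instruction: a ld_imm64 occupies two
struct bpf_insn slots, and the second slot's opcode is 0. Below is a minimal
sketch of a program that branches into such a slot, assuming the insn-building
macros from the kernel's include/linux/filter.h (or the tools/ copy used by the
BPF selftests); the insn indices and values are illustrative, not the original
fuzzer reproducer.

#include <linux/bpf.h>      /* BPF_REG_* register numbers */
#include <linux/filter.h>   /* assumption: kernel-internal/tools header providing BPF_LD_IMM64() etc. */

struct bpf_insn prog[] = {
	/* insns 0-1: 16-byte ld_imm64; insn 1 is the reserved slot with code == 0 */
	BPF_LD_IMM64(BPF_REG_1, 29),
	/* insn 2: branch target = 2 + (-2) + 1 = insn 1, i.e. the reserved slot */
	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, -2),
	/* insns 3-4: normal exit path */
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
};

With the proposed check, push_insn() would flag the BRANCH edge from insn 2 to
insn 1 during check_cfg() and report a "jump to reserved code" error, instead of
the later "invalid BPF_LD_IMM insn" message from check_ld_imm().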
I don't think we should be changing the verifier to make
fuzzer logs more readable.
Same with patch 2. The code is fine as-is.