Message-ID: <20191005203945.6b3845a9@cakuba.netronome.com>
Date: Sat, 5 Oct 2019 20:39:45 -0700
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: Daniel Borkmann <daniel@...earbox.net>,
Alexei Starovoitov <ast@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
Marek Majkowski <marek@...udflare.com>,
Lorenz Bauer <lmb@...udflare.com>,
Alan Maguire <alan.maguire@...cle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 1/5] bpf: Support injecting chain calls into
BPF programs on load
On Sat, 05 Oct 2019 12:29:14 +0200, Toke Høiland-Jørgensen wrote:
> >> +static int bpf_inject_chain_calls(struct bpf_verifier_env *env)
> >> +{
> >> + struct bpf_prog *prog = env->prog;
> >> + struct bpf_insn *insn = prog->insnsi;
> >> + int i, cnt, delta = 0, ret = -ENOMEM;
> >> + const int insn_cnt = prog->len;
> >> + struct bpf_array *prog_array;
> >> + struct bpf_prog *new_prog;
> >> + size_t array_size;
> >> +
> >> + struct bpf_insn call_next[] = {
> >> + BPF_LD_IMM64(BPF_REG_2, 0),
> >> + /* Save real return value for later */
> >> + BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
> >> + /* First try tail call with index ret+1 */
> >> + BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
> >
> > Don't we need to check against the max here, and add Spectre-proofing?
>
> No, I don't think so. This is just setting up the arguments for the
> BPF_TAIL_CALL instruction below. The JIT will do its thing with that and
> emit the range check and the retpoline stuff...
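
Right, the JIT side is covered. In C terms, what the x86 JIT's
emit_bpf_tail_call() generates for each BPF_TAIL_CALL is roughly this
(paraphrasing the comments in arch/x86/net/bpf_jit_comp.c):

	/*
	 *	if (index >= array->map.max_entries)
	 *		goto out;
	 *	if (tail_call_cnt > MAX_TAIL_CALL_CNT)
	 *		goto out;
	 *	prog = array->ptrs[index];
	 *	if (prog == NULL)
	 *		goto out;
	 *	goto *(prog->bpf_func + prologue_size);  /* via retpoline thunk */
	 */
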
Sorry, wrong CPU bug, I meant Meltdown :)
https://elixir.bootlin.com/linux/v5.4-rc1/source/kernel/bpf/verifier.c#L9029
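
That fixup prepends a bounds check plus an AND with the array's
index_mask to every tail call, so a mispredicted index can't be used to
speculatively read past the end of the prog array. Roughly (lifted from
fixup_bpf_calls(), error handling omitted):

	/* instead of changing every JIT dealing with tail_call
	 * emit two extra insns:
	 * if (index >= max_entries) goto out;
	 * index &= array->index_mask;
	 * to avoid out-of-bounds cpu speculation
	 */
	insn_buf[0] = BPF_JMP_IMM(BPF_JGE, BPF_REG_3,
				  map_ptr->max_entries, 2);
	insn_buf[1] = BPF_ALU32_IMM(BPF_AND, BPF_REG_3,
				    container_of(map_ptr,
						 struct bpf_array,
						 map)->index_mask);
	insn_buf[2] = *insn;	/* the original tail call */
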
> >> + BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 1),
> >> + BPF_RAW_INSN(BPF_JMP | BPF_TAIL_CALL, 0, 0, 0, 0),
> >> + /* If that doesn't work, try with index 0 (wildcard) */
> >> + BPF_MOV64_IMM(BPF_REG_3, 0),
> >> + BPF_RAW_INSN(BPF_JMP | BPF_TAIL_CALL, 0, 0, 0, 0),
> >> + /* Restore saved return value and exit */
> >> + BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
> >> + BPF_EXIT_INSN()
> >> + };
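
FWIW, in C terms the injected sequence boils down to something like the
below. chain_map and original_prog are illustrative names only;
chain_map stands for the prog array whose pointer presumably gets
patched into the BPF_LD_IMM64 above:

	ret = original_prog(ctx);
	/* bpf_tail_call() returns only if the slot is empty or the
	 * index is out of range; on success it never comes back
	 */
	bpf_tail_call(ctx, &chain_map, ret + 1);
	bpf_tail_call(ctx, &chain_map, 0);	/* wildcard slot */
	return ret;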