Message-ID: <Ya+YgJta0JYBvxrB@krava>
Date: Tue, 7 Dec 2021 18:23:12 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Andrii Nakryiko <andriin@...com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, netdev@...r.kernel.org,
bpf@...r.kernel.org, Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...omium.org>
Subject: Re: [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers
On Mon, Dec 06, 2021 at 01:54:53PM -0800, Andrii Nakryiko wrote:
>
> On 12/4/21 6:06 AM, Jiri Olsa wrote:
> > Adding following helpers for tracing programs:
> >
> > Get n-th argument of the traced function:
> > long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> >
> > Get return value of the traced function:
> > long bpf_get_func_ret(void *ctx, u64 *value)
> >
> > Get argument count of the traced function:
> > long bpf_get_func_arg_cnt(void *ctx)
> >
> > The trampoline now stores the number of arguments at the ctx-8
> > address, so it's easy to verify the argument index and to find
> > the position of the return value.
> >
> > The function's ip address is moved behind the number of arguments
> > on the trampoline stack, so it's now stored at the ctx-16 address,
> > if it's needed.
> >
> > All helpers above are inlined by verifier.
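
fwiw, from the BPF program side the usage would look something like
this (just a rough sketch of a hypothetical fentry program, the
program/function names are made up, not part of this patch):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(check_args)
  {
          __u64 cnt, arg = 0;

          /* number of arguments the trampoline stored at ctx-8 */
          cnt = bpf_get_func_arg_cnt(ctx);

          /* first argument (zero based), fails with -EINVAL
           * if the index is >= cnt
           */
          if (bpf_get_func_arg(ctx, 0, &arg))
                  return 0;

          bpf_printk("arg cnt %llu arg0 %llu", cnt, arg);
          return 0;
  }
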
> >
> > Signed-off-by: Jiri Olsa <jolsa@...nel.org>
> > ---
>
>
> Please cc me at andrii@...nel.org on future emails, it'll save a lot
> of trouble with replying to your emails :) Thanks!
ugh, updated
SNIP
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index c26871263f1f..d5a3791071d6 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -4983,6 +4983,31 @@ union bpf_attr {
> > * Return
> > * The number of loops performed, **-EINVAL** for invalid **flags**,
> > * **-E2BIG** if **nr_loops** exceeds the maximum number of loops.
> > + *
> > + * long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> > + * Description
> > + * Get **n**-th argument (zero based) of the traced function (for tracing programs)
> > + * returned in **value**.
> > + *
> > + * Return
> > + * 0 on success.
> > + * **-EINVAL** if n >= argument count of the traced function.
> > + *
> > + * long bpf_get_func_ret(void *ctx, u64 *value)
> > + * Description
> > + * Get return value of the traced function (for tracing programs)
> > + * in **value**.
> > + *
> > + * Return
> > + * 0 on success.
> > + * **-EINVAL** for tracing programs other than BPF_TRACE_FEXIT or BPF_MODIFY_RETURN.
>
>
> -EOPNOTSUPP maybe?
ok
>
>
> > + *
> > + * long bpf_get_func_arg_cnt(void *ctx)
> > + * Description
> > + * Get number of arguments of the traced function (for tracing programs).
> > + *
> > + * Return
> > + * The number of arguments of the traced function.
> > */
> > #define __BPF_FUNC_MAPPER(FN) \
> > FN(unspec), \
> > @@ -5167,6 +5192,9 @@ union bpf_attr {
> > FN(kallsyms_lookup_name), \
> > FN(find_vma), \
> > FN(loop), \
> > + FN(get_func_arg), \
> > + FN(get_func_ret), \
> > + FN(get_func_arg_cnt), \
> > /* */
> > /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 6522ffdea487..cf6853d3a8e9 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -12974,6 +12974,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
> > static int do_misc_fixups(struct bpf_verifier_env *env)
> > {
> > struct bpf_prog *prog = env->prog;
> > + enum bpf_attach_type eatype = prog->expected_attach_type;
> > bool expect_blinding = bpf_jit_blinding_enabled(prog);
> > enum bpf_prog_type prog_type = resolve_prog_type(prog);
> > struct bpf_insn *insn = prog->insnsi;
> > @@ -13344,11 +13345,79 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> > continue;
> > }
> > + /* Implement bpf_get_func_arg inline. */
> > + if (prog_type == BPF_PROG_TYPE_TRACING &&
> > + insn->imm == BPF_FUNC_get_func_arg) {
> > + /* Load nr_args from ctx - 8 */
> > + insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> > + insn_buf[1] = BPF_JMP32_REG(BPF_JGE, BPF_REG_2, BPF_REG_0, 6);
> > + insn_buf[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 3);
> > + insn_buf[3] = BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1);
> > + insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0);
> > + insn_buf[5] = BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
> > + insn_buf[6] = BPF_MOV64_IMM(BPF_REG_0, 0);
> > + insn_buf[7] = BPF_JMP_A(1);
> > + insn_buf[8] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
> > + cnt = 9;
> > +
> > + new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> > + if (!new_prog)
> > + return -ENOMEM;
> > +
> > + delta += cnt - 1;
> > + env->prog = prog = new_prog;
> > + insn = new_prog->insnsi + i + delta;
> > + continue;
> > + }
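
fwiw, the patched sequence above does roughly this (plain C sketch,
the function name is made up, nr_args is the count the trampoline
stores at ctx-8):

  static long inlined_get_func_arg(void *ctx, u32 n, u64 *value)
  {
          u64 nr_args = ((u64 *)ctx)[-1];   /* stored by trampoline at ctx-8 */

          if (n >= nr_args)
                  return -EINVAL;
          *value = ((u64 *)ctx)[n];
          return 0;
  }
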
> > +
> > + /* Implement bpf_get_func_ret inline. */
> > + if (prog_type == BPF_PROG_TYPE_TRACING &&
> > + insn->imm == BPF_FUNC_get_func_ret) {
> > + if (eatype == BPF_TRACE_FEXIT ||
> > + eatype == BPF_MODIFY_RETURN) {
> > + /* Load nr_args from ctx - 8 */
> > + insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> > + insn_buf[1] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 3);
> > + insn_buf[2] = BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1);
> > + insn_buf[3] = BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
> > + insn_buf[4] = BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, 0);
> > + insn_buf[5] = BPF_MOV64_IMM(BPF_REG_0, 0);
> > + cnt = 6;
> > + } else {
> > + insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
> > + cnt = 1;
> > + }
> > +
> > + new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> > + if (!new_prog)
> > + return -ENOMEM;
> > +
> > + delta += cnt - 1;
> > + env->prog = prog = new_prog;
> > + insn = new_prog->insnsi + i + delta;
> > + continue;
> > + }
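
and the FEXIT/MODIFY_RETURN branch is roughly (again just a sketch,
made-up name):

  static long inlined_get_func_ret(void *ctx, u64 *value)
  {
          u64 nr_args = ((u64 *)ctx)[-1];

          /* the return value sits right after the arguments
           * on the trampoline stack
           */
          *value = ((u64 *)ctx)[nr_args];
          return 0;
  }

the other attach types just get the error return.
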
> > +
> > + /* Implement get_func_arg_cnt inline. */
> > + if (prog_type == BPF_PROG_TYPE_TRACING &&
> > + insn->imm == BPF_FUNC_get_func_arg_cnt) {
> > + /* Load nr_args from ctx - 8 */
> > + insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> > +
> > + new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
> > + if (!new_prog)
> > + return -ENOMEM;
> > +
> > + env->prog = prog = new_prog;
> > + insn = new_prog->insnsi + i + delta;
> > + continue;
> > + }
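
this one is just the single load of the stored count, roughly:

  static u64 inlined_get_func_arg_cnt(void *ctx)
  {
          return ((u64 *)ctx)[-1];
  }
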
>
>
> To be entirely honest, I'm not even sure we need to inline them. In programs
> that care about performance they will be called at most once. In others it
> doesn't matter. But even if they weren't, is the function call really such a
> big overhead for tracing cases? I don't mind it either, I just can hardly
> follow it.
maybe just inline get_func_arg_cnt, because it's just one instruction,
for the other 2 I don't mind skipping the inline
jirka