Message-ID: <CADxym3YBFXKKHF-KJENqcPorT=SwZO3JJCnO_UwWxU4wo8qEzA@mail.gmail.com>
Date: Mon, 5 Jun 2023 10:40:17 +0800
From: Menglong Dong <menglong8.dong@...il.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Jiri Olsa <olsajiri@...il.com>, "David S. Miller" <davem@...emloft.net>, 
	David Ahern <dsahern@...nel.org>, Alexei Starovoitov <ast@...nel.org>, 
	Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, 
	Martin KaFai Lau <martin.lau@...ux.dev>, Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>, 
	KP Singh <kpsingh@...nel.org>, Jiri Olsa <jolsa@...nel.org>, X86 ML <x86@...nel.org>, 
	benbjiang@...cent.com, Network Development <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>, 
	LKML <linux-kernel@...r.kernel.org>, 
	"open list:KERNEL SELFTEST FRAMEWORK" <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH bpf-next v2 2/5] bpf, x86: allow function arguments up to
 14 for TRACING

On Sat, Jun 3, 2023 at 2:31 AM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> On Fri, Jun 2, 2023 at 12:01 AM <menglong8.dong@...il.com> wrote:
> >
> > From: Menglong Dong <imagedong@...cent.com>
>
> Please trim your cc when you respin. It's unnecessarily huge.

Sorry for bothering unrelated people. The Cc list was
generated by ./scripts/get_maintainer.pl; I'll keep it
under 15 recipients when I respin.

>
[...]
> > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> > index 1056bbf55b17..0e247bb7d6f6 100644
> > --- a/arch/x86/net/bpf_jit_comp.c
> > +++ b/arch/x86/net/bpf_jit_comp.c
> > @@ -1868,7 +1868,7 @@ static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_regs,
> >          * mov QWORD PTR [rbp-0x10],rdi
> >          * mov QWORD PTR [rbp-0x8],rsi
> >          */
> > -       for (i = 0, j = 0; i < min(nr_regs, 6); i++) {
> > +       for (i = 0, j = 0; i < min(nr_regs, MAX_BPF_FUNC_ARGS); i++) {
> >                 /* The arg_size is at most 16 bytes, enforced by the verifier. */
> >                 arg_size = m->arg_size[j];
> >                 if (arg_size > 8) {
> > @@ -1876,10 +1876,22 @@ static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_regs,
> >                         next_same_struct = !next_same_struct;
> >                 }
> >
> > -               emit_stx(prog, bytes_to_bpf_size(arg_size),
> > -                        BPF_REG_FP,
> > -                        i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
> > -                        -(stack_size - i * 8));
> > +               if (i <= 5) {
> > +                       /* store function arguments in regs */
>
> The comment is confusing.
> It's not storing arguments in regs.
> It copies them from regs into stack.

Right, I'll use "copy arguments from regs into stack"
instead.
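
For reference, this is roughly what the two branches emit
(a sketch; the 0x18 base assumes the frame at trampoline
entry: saved rbp at [rbp], the trampoline's return address
at [rbp + 8], the traced function's return address at
[rbp + 0x10], so its stack arguments start at [rbp + 0x18]):

    /* i <= 5: copy the i-th argument from its register
     * (rdi/rsi/rdx/rcx/r8, and x86 r9 directly for i == 5)
     */
    mov QWORD PTR [rbp - (stack_size - i * 8)], rdi

    /* i > 5: copy the i-th argument from the caller's
     * stack frame, bouncing through rax
     */
    mov rax, QWORD PTR [rbp + 0x18 + (i - 6) * 8]
    mov QWORD PTR [rbp - (stack_size - i * 8)], rax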

>
> > +                       emit_stx(prog, bytes_to_bpf_size(arg_size),
> > +                                BPF_REG_FP,
> > +                                i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
> > +                                -(stack_size - i * 8));
> > +               } else {
> > +                       /* store function arguments in stack */
> > +                       emit_ldx(prog, bytes_to_bpf_size(arg_size),
> > +                                BPF_REG_0, BPF_REG_FP,
> > +                                (i - 6) * 8 + 0x18);
> > +                       emit_stx(prog, bytes_to_bpf_size(arg_size),
>
> and we will have garbage values in upper bytes.
> Probably should fix both here and in regular copy from reg.
>

I noticed it too... I'll dig into it to find a solution.
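
As a minimal user-space sketch of the concern (just an
illustration on little-endian x86, not the JIT code): a
narrow store into an 8-byte stack slot leaves whatever was
there before in the upper bytes.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            uint64_t slot;
            uint32_t arg = 0x12345678;  /* a 4-byte argument */

            /* stale data already in the slot */
            memset(&slot, 0xff, sizeof(slot));
            /* narrow store, like a BPF_W emit_stx() */
            memcpy(&slot, &arg, sizeof(arg));
            /* prints slot = 0xffffffff12345678: the upper
             * 32 bits still hold the stale 0xff bytes
             */
            printf("slot = 0x%016llx\n",
                   (unsigned long long)slot);
            return 0;
    }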

> > +                                BPF_REG_FP,
> > +                                BPF_REG_0,
> > +                                -(stack_size - i * 8));
> > +               }
> >
[...]
> >         /* Generated trampoline stack layout:
> > @@ -2170,7 +2219,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >          *
> >          * RBP - ip_off    [ traced function ]  BPF_TRAMP_F_IP_ARG flag
> >          *
> > +        * RBP - rbx_off   [ rbx value       ]  always
> > +        *
>
> That is the case already and we just didn't document it, right?
>

I'm afraid not anymore. In the original logic, we used
"push rbx" after "sub rsp, stack_size", which stored rbx
at the current top of the stack.

However, now we need the arguments, which we copy from
the caller's stack frame into the current stack frame in
prepare_origin_stack(), to stay at the top of the stack,
so that they are passed to orig_call on the stack.
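
To illustrate with the hunk below (a sketch of the
change), the prologue goes from

    push rbp
    mov  rbp, rsp
    sub  rsp, stack_size
    push rbx                /* rbx lands at the stack top */

to

    push rbp
    mov  rbp, rsp
    sub  rsp, stack_size
    mov  QWORD PTR [rbp - rbx_off], rbx

so the slots at the stack top stay free for the copied
stack arguments across the call to orig_call.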

> >          * RBP - run_ctx_off [ bpf_tramp_run_ctx ]
> > +        *
> > +        *                     [ stack_argN ]  BPF_TRAMP_F_CALL_ORIG
> > +        *                     [ ...        ]
> > +        *                     [ stack_arg2 ]
> > +        * RBP - arg_stack_off [ stack_arg1 ]
> >          */
> >
> >         /* room for return value of orig_call or fentry prog */
> > @@ -2190,9 +2246,17 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >
> >         ip_off = stack_size;
> >
> > +       stack_size += 8;
> > +       rbx_off = stack_size;
> > +
> >         stack_size += (sizeof(struct bpf_tramp_run_ctx) + 7) & ~0x7;
> >         run_ctx_off = stack_size;
> >
> > +       if (nr_regs > 6 && (flags & BPF_TRAMP_F_CALL_ORIG))
> > +               stack_size += (nr_regs - 6) * 8;
> > +
> > +       arg_stack_off = stack_size;
> > +
> >         if (flags & BPF_TRAMP_F_SKIP_FRAME) {
> >                 /* skip patched call instruction and point orig_call to actual
> >                  * body of the kernel function.
> > @@ -2212,8 +2276,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >         x86_call_depth_emit_accounting(&prog, NULL);
> >         EMIT1(0x55);             /* push rbp */
> >         EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
> > -       EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
> > -       EMIT1(0x53);             /* push rbx */
> > +       EMIT3_off32(0x48, 0x81, 0xEC, stack_size); /* sub rsp, stack_size */
> > +       /* mov QWORD PTR [rbp - rbx_off], rbx */
> > +       emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
> >
> >         /* Store number of argument registers of the traced function:
> >          *   mov rax, nr_regs
> > @@ -2262,6 +2327,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >
> >         if (flags & BPF_TRAMP_F_CALL_ORIG) {
> >                 restore_regs(m, &prog, nr_regs, regs_off);
> > +               prepare_origin_stack(m, &prog, nr_regs, arg_stack_off);
> >
> >                 if (flags & BPF_TRAMP_F_ORIG_STACK) {
> >                         emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
> > @@ -2321,14 +2387,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >         if (save_ret)
> >                 emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
> >
> > -       EMIT1(0x5B); /* pop rbx */
> > +       emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
>
> It can stay as 'pop', no?
>

Since we don't use "push rbx" anymore, we can't use
"pop" here either: rbx is now stored at a specific
location on the stack rather than at the stack top.
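
That is, the epilogue restore becomes (sketch):

    mov rbx, QWORD PTR [rbp - rbx_off]

A "pop rbx" here would read from the current stack top
(rbp - stack_size), which in the new layout holds the
copied stack arguments (when there are any), not the
saved rbx.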

Thanks!
Menglong Dong

> >         EMIT1(0xC9); /* leave */
> >         if (flags & BPF_TRAMP_F_SKIP_FRAME)
> >                 /* skip our return address and return to parent */
> >                 EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
> >         emit_return(&prog, prog);
> >         /* Make sure the trampoline generation logic doesn't overflow */
> > -       if (WARN_ON_ONCE(prog > (u8 *)image_end - BPF_INSN_SAFETY)) {
> > +       if (prog > (u8 *)image_end - BPF_INSN_SAFETY) {
> >                 ret = -EFAULT;
> >                 goto cleanup;
> >         }
> > --
> > 2.40.1
> >
