Message-ID: <20200304151303.GC9984@chromium.org>
Date: Wed, 4 Mar 2020 16:13:03 +0100
From: KP Singh <kpsingh@...omium.org>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: linux-security-module@...r.kernel.org,
open list <linux-kernel@...r.kernel.org>,
bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Paul Turner <pjt@...gle.com>, Jann Horn <jannh@...gle.com>,
Florent Revest <revest@...omium.org>,
Brendan Jackman <jackmanb@...omium.org>
Subject: Re: [PATCH bpf-next v2 3/7] bpf: Introduce BPF_MODIFY_RETURN
On 03-Mar 21:08, Andrii Nakryiko wrote:
> On Tue, Mar 3, 2020 at 5:56 PM KP Singh <kpsingh@...omium.org> wrote:
> >
> > From: KP Singh <kpsingh@...gle.com>
> >
> > When multiple programs are attached, each program receives the return
> > value from the previous program on the stack and the last program
> > provides the return value to the attached function.
> >
> > The fmod_ret bpf programs are run after the fentry programs and before
> > the fexit programs. The original function is only called if all the
> > fmod_ret programs return 0, to avoid any unintended side-effects. The
> > success value, i.e. 0, is not currently configurable, but could be made
> > so by letting user-space specify it at load time.
> >
> > For example:
> >
> > int func_to_be_attached(int a, int b)
> > {  <--- do_fentry
> >
> > do_fmod_ret:
> >    <update ret by calling fmod_ret>
> >    if (ret != 0)
> >         goto do_fexit;
> >
> > original_function:
> >
> >     <side_effects_happen_here>
> >
> > } <--- do_fexit
> >
> > The fmod_ret program attached to this function can be defined as:
> >
> > SEC("fmod_ret/func_to_be_attached")
> > int BPF_PROG(func_name, int a, int b, int ret)
> > {
> > // This will skip the original function logic.
> > return 1;
> > }
> >
> > The first fmod_ret program is passed 0 in its return argument.
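
To make the return-value chaining concrete, here is a minimal sketch
(not part of the patch itself) of a second fmod_ret program attached to
the same function. It only looks at the 'ret' value handed to it by the
previously-run program; the program name is illustrative and the header
that provides BPF_PROG may differ depending on the libbpf version:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>  /* or the selftests helper defining BPF_PROG */

SEC("fmod_ret/func_to_be_attached")
int BPF_PROG(observe_prev_ret, int a, int b, int ret)
{
	/* 'ret' is 0 for the first program in the chain, otherwise the
	 * value returned by the previously attached fmod_ret program.
	 */
	if (ret != 0)
		return ret;	/* keep skipping the original function */

	return 0;		/* let the original function run */
}

char _license[] SEC("license") = "GPL";
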
> >
> > Signed-off-by: KP Singh <kpsingh@...gle.com>
> > ---
> > arch/x86/net/bpf_jit_comp.c | 130 ++++++++++++++++++++++++++++++---
> > include/linux/bpf.h | 1 +
> > include/uapi/linux/bpf.h | 1 +
> > kernel/bpf/btf.c | 3 +-
> > kernel/bpf/syscall.c | 1 +
> > kernel/bpf/trampoline.c | 5 +-
> > kernel/bpf/verifier.c | 1 +
> > tools/include/uapi/linux/bpf.h | 1 +
> > 8 files changed, 130 insertions(+), 13 deletions(-)
> >
>
> This looks good, but I'll let Alexei check all the assembly generation
> logic; I'm not too big of an expert on that.
>
> [...]
>
>
> > static int emit_fallback_jump(u8 **pprog)
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 98ec10b23dbb..3cfdc216a2f4 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -473,6 +473,7 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start);
> >
> > enum bpf_tramp_prog_type {
> > BPF_TRAMP_FENTRY,
> > + BPF_TRAMP_MODIFY_RETURN,
>
> This is probably a bad idea to re-number BPF_TRAMP_FEXIT for no good
> reason. E.g., if there are some drgn scripts that do some internal
> state printing, this is a major inconvenience while really providing no
> benefit in itself. Consider putting it right before BPF_TRAMP_MAX.
Makes sense. I somehow initially (incorrectly) assumed that the order
represented the order of execution, but the only real demarcation
is BPF_TRAMP_MAX. Updated it for v3.
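
i.e. only appending the new entry right before BPF_TRAMP_MAX so the
existing values stay stable, roughly (a sketch of the suggestion, not
necessarily the exact v3 hunk):

enum bpf_tramp_prog_type {
	BPF_TRAMP_FENTRY,
	BPF_TRAMP_FEXIT,
	BPF_TRAMP_MODIFY_RETURN, /* new, keeps FENTRY/FEXIT values unchanged */
	BPF_TRAMP_MAX,
	BPF_TRAMP_REPLACE, /* more than MAX */
};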
- KP
>
> > BPF_TRAMP_FEXIT,
> > BPF_TRAMP_MAX,
> > BPF_TRAMP_REPLACE, /* more than MAX */
>
> [...]