Message-ID: <20200327031256.vhk2luomxgex3ui4@ast-mbp>
Date: Thu, 26 Mar 2020 20:12:56 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: KP Singh <kpsingh@...omium.org>
Cc: linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
linux-security-module@...r.kernel.org,
Brendan Jackman <jackmanb@...gle.com>,
Florent Revest <revest@...gle.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
James Morris <jmorris@...ei.org>,
Kees Cook <keescook@...omium.org>,
Paul Turner <pjt@...gle.com>, Jann Horn <jannh@...gle.com>,
Florent Revest <revest@...omium.org>,
Brendan Jackman <jackmanb@...omium.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH bpf-next v7 4/8] bpf: lsm: Implement attach, detach and
execution
On Thu, Mar 26, 2020 at 03:28:19PM +0100, KP Singh wrote:
>
> if (arg == nr_args) {
> - if (prog->expected_attach_type == BPF_TRACE_FEXIT) {
> + /* BPF_LSM_MAC programs can only be attached to int and void
> + * functions. When attached to a void function they result in
> + * an FEXIT trampoline; when attached to a function that
> + * returns an int, a MODIFY_RETURN trampoline.
> + */
> + if (prog->expected_attach_type == BPF_TRACE_FEXIT ||
> + prog->expected_attach_type == BPF_LSM_MAC) {
> if (!t)
> return true;
> t = btf_type_by_id(btf, t->type);
Could you add a comment here noting that, though the BPF_MODIFY_RETURN-like
check
if (ret_type != 'int') return -EINVAL;
is _not_ done here, it is still safe, since LSM hooks only have
void and int return types.
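For reference, folding such a comment into the hunk could look roughly
like this (the wording is only a suggestion; the context is copied from
the quoted hunk above):

	if (arg == nr_args) {
		/* BPF_LSM_MAC programs can only be attached to int and
		 * void functions: a void hook results in an FEXIT
		 * trampoline, an int hook in a MODIFY_RETURN trampoline.
		 *
		 * Unlike the BPF_MODIFY_RETURN case, no explicit
		 * "if (ret_type != int) return -EINVAL;" check is done
		 * here. That is still safe, since LSM hooks only have
		 * void and int return types.
		 */
		if (prog->expected_attach_type == BPF_TRACE_FEXIT ||
		    prog->expected_attach_type == BPF_LSM_MAC) {
			if (!t)
				return true;
			t = btf_type_by_id(btf, t->type);
		}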
> + case BPF_LSM_MAC:
> + if (!prog->aux->attach_func_proto->type)
> + /* The function returns void; we cannot modify its
> + * return value.
> + */
> + return BPF_TRAMP_FEXIT;
> + else
> + return BPF_TRAMP_MODIFY_RETURN;
I was wondering whether it would help performance significantly enough
if we added a flavor of BPF_TRAMP_FEXIT that doesn't have
BPF_TRAMP_F_CALL_ORIG.
That would save the cost of the nop call, but I guess the indirect call
in the lsm infra is slow enough that these few extra cycles won't be
noticeable. So I'm fine with it as-is. When lsm hooks get rid of the
indirect call we can optimize it further.
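To make the two cases concrete from the program side, here is a rough,
untested sketch of what an int hook vs. a void hook would look like with
the libbpf "lsm/" section support from this series (the hook choice, the
program names and the policy itself are only illustrative assumptions):

/* SPDX-License-Identifier: GPL-2.0 */
#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* int-returning hook: attached via a MODIFY_RETURN trampoline, so the
 * program's return value can reject the operation. The trailing "ret"
 * argument is the arg == nr_args slot discussed above.
 */
SEC("lsm/file_mprotect")
int BPF_PROG(mprotect_deny_stack, struct vm_area_struct *vma,
	     unsigned long reqprot, unsigned long prot, int ret)
{
	/* Respect a denial from an earlier program or LSM. */
	if (ret != 0)
		return ret;

	/* Illustrative policy: refuse mprotect() on the stack VMA. */
	if (vma->vm_start <= vma->vm_mm->start_stack &&
	    vma->vm_end >= vma->vm_mm->start_stack)
		return -EPERM;

	return 0;
}

/* void hook: attached via an FEXIT trampoline, so the program can only
 * observe; its return value cannot change the hook's outcome.
 */
SEC("lsm/bprm_committed_creds")
int BPF_PROG(exec_audit, struct linux_binprm *bprm)
{
	bpf_printk("creds committed for exec\n");
	return 0;
}

Which of the two trampolines a program gets is decided purely by the BTF
return type of the hook it attaches to, as in the hunk above.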