[<prev] [next>] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAEf4BzYJanf4gCmeeNHZhjJeUwwOQOCteCP4Uoj3yRD698BJCg@mail.gmail.com>
Date: Mon, 2 Aug 2021 14:35:35 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Dave Marchevsky <davemarchevsky@...com>
Cc: bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>
Subject: Re: [PATCH bpf-next 1/1] bpf: migrate cgroup_bpf to internal
cgroup_bpf_attach_type enum
On Sat, Jul 31, 2021 at 4:33 PM Dave Marchevsky <davemarchevsky@...com> wrote:
>
> Add an enum (cgroup_bpf_attach_type) containing only valid cgroup_bpf
> attach types and a function to map bpf_attach_type values to the new
> enum. Inspired by netns_bpf_attach_type.
>
> Then, migrate cgroup_bpf to use cgroup_bpf_attach_type wherever
> possible. Functionality is unchanged as attach_type_to_prog_type
> switches in bpf/syscall.c were preventing non-cgroup programs from
> making use of the invalid cgroup_bpf array slots.
>
> As a result struct cgroup_bpf uses 504 fewer bytes relative to when its
> arrays were sized using MAX_BPF_ATTACH_TYPE.
>
> bpf_cgroup_storage is notably not migrated as struct
> bpf_cgroup_storage_key is part of uapi and contains a bpf_attach_type
> member which is not meant to be opaque. Similarly, bpf_cgroup_link
> continues to report its bpf_attach_type member to userspace via fdinfo
> and bpf_link_info.
>
> To ease disambiguation, bpf_attach_type variables are renamed from
> 'type' to 'atype' when changed to cgroup_bpf_attach_type.
>
> Regarding testing: biggest concerns here are 1) attach/detach/run for
> programs which shouldn't map to a cgroup_bpf_attach_type should continue
> to not involve cgroup_bpf codepaths; and 2) attach types that should be
> mapped to a cgroup_bpf_attach_type do so correctly and run as expected.
>
> Existing selftests cover both scenarios well. The udp_limit selftest
> specifically validates the 2nd case - BPF_CGROUP_INET_SOCK_RELEASE is
> larger than MAX_CGROUP_BPF_ATTACH_TYPE so if it were not correctly
> mapped to CG_BPF_CGROUP_INET_SOCK_RELEASE the test would fail.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@...com>
> ---
Love the change, thanks! This has been bothering me for a while now.
type -> atype rename is quite noisy, though. I don't mind it, but I'll
let Daniel decide.
Acked-by: Andrii Nakryiko <andrii@...nel.org>
> include/linux/bpf-cgroup.h | 200 +++++++++++++++++++++++----------
> include/uapi/linux/bpf.h | 2 +-
> kernel/bpf/cgroup.c | 154 +++++++++++++++----------
> net/ipv4/af_inet.c | 6 +-
> net/ipv4/udp.c | 2 +-
> net/ipv6/af_inet6.c | 6 +-
> net/ipv6/udp.c | 2 +-
> tools/include/uapi/linux/bpf.h | 2 +-
> 8 files changed, 243 insertions(+), 131 deletions(-)
>
[...]
> #define BPF_CGROUP_RUN_PROG_INET_SOCK(sk) \
> - BPF_CGROUP_RUN_SK_PROG(sk, BPF_CGROUP_INET_SOCK_CREATE)
> + BPF_CGROUP_RUN_SK_PROG(sk, CG_BPF_CGROUP_INET_SOCK_CREATE)
>
> #define BPF_CGROUP_RUN_PROG_INET_SOCK_RELEASE(sk) \
> - BPF_CGROUP_RUN_SK_PROG(sk, BPF_CGROUP_INET_SOCK_RELEASE)
> + BPF_CGROUP_RUN_SK_PROG(sk, CG_BPF_CGROUP_INET_SOCK_RELEASE)
>
> #define BPF_CGROUP_RUN_PROG_INET4_POST_BIND(sk) \
> - BPF_CGROUP_RUN_SK_PROG(sk, BPF_CGROUP_INET4_POST_BIND)
> + BPF_CGROUP_RUN_SK_PROG(sk, CG_BPF_CGROUP_INET4_POST_BIND)
>
> #define BPF_CGROUP_RUN_PROG_INET6_POST_BIND(sk) \
> - BPF_CGROUP_RUN_SK_PROG(sk, BPF_CGROUP_INET6_POST_BIND)
> + BPF_CGROUP_RUN_SK_PROG(sk, CG_BPF_CGROUP_INET6_POST_BIND)
>
all these macros are candidate for a rewrite to proper (always
inlined) functions, similarly to what I did in [0]. It would make it
much harder to accidentally use wrong constant and will make typing
explicit. But let's see how that change goes first.
[0] https://patchwork.kernel.org/project/netdevbpf/patch/20210730053413.1090371-3-andrii@kernel.org/
> -#define BPF_CGROUP_RUN_SA_PROG(sk, uaddr, type) \
> +#define BPF_CGROUP_RUN_SA_PROG(sk, uaddr, atype) \
> ({ \
> u32 __unused_flags; \
> int __ret = 0; \
> - if (cgroup_bpf_enabled(type)) \
> - __ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, type, \
> + if (cgroup_bpf_enabled(atype)) \
> + __ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, atype, \
> NULL, \
> &__unused_flags); \
> __ret; \
> })
>
[...]
Powered by blists - more mailing lists