Message-ID: <CAEf4BzZL_7dGmuzt-weids8FMJc5Tph+-om2d9zgQGvd+yC82Q@mail.gmail.com>
Date:   Fri, 19 Nov 2021 09:25:16 -0800
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Mauricio Vásquez <mauricio@...volk.io>
Cc:     Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Rafael David Tinoco <rafaeldtinoco@...il.com>,
        Lorenzo Fontana <lorenzo.fontana@...stic.co>,
        Leonardo Di Donato <leonardo.didonato@...stic.co>
Subject: Re: [PATCH bpf-next v2 2/4] libbpf: Introduce 'btf_custom' to 'bpf_obj_open_opts'

On Tue, Nov 16, 2021 at 8:42 AM Mauricio Vásquez <mauricio@...volk.io> wrote:
>
> Commit 1373ff599556 ("libbpf: Introduce 'btf_custom_path' to
> 'bpf_obj_open_opts'") introduced btf_custom_path, which allows developers
> to specify a BTF file path to be used for CO-RE relocations. That
> implementation parses and releases the BTF file for each bpf object.
>
> This commit introduces a new 'btf_custom' option that allows users to
> pass the BTF object directly instead of a path. This avoids parsing and
> releasing the same BTF file multiple times when an application loads
> multiple bpf objects.
>
> Our specific use case is BTFGen[0], where we want to reuse the same BTF
> file across multiple bpf objects. In this case, passing btf_custom_path
> is not only inefficient, it also complicates the implementation: we want
> to keep pointers to BTF types, but those are invalidated once the bpf
> object is closed with bpf_object__close().
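
(For context, a minimal sketch of how the proposed option would be used;
the object and BTF file names below are placeholders, and btf_custom is
the new field added by this patch:)

#include <bpf/btf.h>
#include <bpf/libbpf.h>

/* sketch only: parse the target BTF once and share it across objects */
static int open_objects_with_shared_btf(void)
{
	struct bpf_object *obj1, *obj2;
	struct btf *btf;
	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);

	/* parse the custom/target BTF a single time... */
	btf = btf__parse("custom.btf", NULL);
	if (libbpf_get_error(btf))
		return -1;

	/* ...and reuse the same struct btf for every object, instead of
	 * setting btf_custom_path and re-parsing the file per object
	 */
	opts.btf_custom = btf;
	obj1 = bpf_object__open_file("prog1.bpf.o", &opts);
	obj2 = bpf_object__open_file("prog2.bpf.o", &opts);

	/* ... check errors, load, attach, use the objects ... */

	bpf_object__close(obj1);
	bpf_object__close(obj2);

	/* libbpf does not free a user-provided BTF (see the out: hunk in
	 * bpf_object__relocate_core() below), so the caller releases it
	 */
	btf__free(btf);
	return 0;
}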

How much slower and harder is it in practice, though? Can you please
provide some numbers? How many objects are going to reuse the same
struct btf? Parsing a raw BTF file is quite efficient, so I'm curious
at what scale this becomes unacceptable.
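
(To make the comparison concrete, a rough sketch of the existing
btf_custom_path flow under discussion, with placeholder file names:)

#include <bpf/libbpf.h>

/* sketch only: with btf_custom_path, each object parses and frees its
 * own copy of the BTF file internally during bpf_object__load()
 */
static int open_objects_with_btf_path(void)
{
	struct bpf_object *obj1, *obj2;
	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
		.btf_custom_path = "custom.btf",
	);

	obj1 = bpf_object__open_file("prog1.bpf.o", &opts);
	obj2 = bpf_object__open_file("prog2.bpf.o", &opts);

	/* ... each subsequent bpf_object__load() re-parses custom.btf ... */

	bpf_object__close(obj1);
	bpf_object__close(obj2);
	return 0;
}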


>
> [0]: https://github.com/kinvolk/btfgen/
>
> Signed-off-by: Mauricio Vásquez <mauricio@...volk.io>
> Signed-off-by: Rafael David Tinoco <rafael.tinoco@...asec.com>
> Signed-off-by: Lorenzo Fontana <lorenzo.fontana@...stic.co>
> Signed-off-by: Leonardo Di Donato <leonardo.didonato@...stic.co>
> ---
>  tools/lib/bpf/libbpf.c | 20 ++++++++++++++++----
>  tools/lib/bpf/libbpf.h |  9 ++++++++-
>  2 files changed, 24 insertions(+), 5 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index de7e09a6b5ec..6ca76365c6da 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -542,6 +542,8 @@ struct bpf_object {
>         char *btf_custom_path;
>         /* vmlinux BTF override for CO-RE relocations */
>         struct btf *btf_vmlinux_override;
> +       /* true when the user provided the btf structure with the btf_custom opt */
> +       bool user_provided_btf_vmlinux;
>         /* Lazily initialized kernel module BTFs */
>         struct module_btf *btf_modules;
>         bool btf_modules_loaded;
> @@ -2886,7 +2888,7 @@ static int bpf_object__load_vmlinux_btf(struct bpf_object *obj, bool force)
>         int err;
>
>         /* btf_vmlinux could be loaded earlier */
> -       if (obj->btf_vmlinux || obj->gen_loader)
> +       if (obj->btf_vmlinux || obj->btf_vmlinux_override || obj->gen_loader)
>                 return 0;
>
>         if (!force && !obj_needs_vmlinux_btf(obj))
> @@ -5474,7 +5476,7 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
>         if (obj->btf_ext->core_relo_info.len == 0)
>                 return 0;
>
> -       if (targ_btf_path) {
> +       if (!obj->user_provided_btf_vmlinux && targ_btf_path) {
>                 obj->btf_vmlinux_override = btf__parse(targ_btf_path, NULL);
>                 err = libbpf_get_error(obj->btf_vmlinux_override);
>                 if (err) {
> @@ -5543,8 +5545,10 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
>
>  out:
>         /* obj->btf_vmlinux and module BTFs are freed after object load */
> -       btf__free(obj->btf_vmlinux_override);
> -       obj->btf_vmlinux_override = NULL;
> +       if (!obj->user_provided_btf_vmlinux) {
> +               btf__free(obj->btf_vmlinux_override);
> +               obj->btf_vmlinux_override = NULL;
> +       }
>
>         if (!IS_ERR_OR_NULL(cand_cache)) {
>                 hashmap__for_each_entry(cand_cache, entry, i) {
> @@ -6767,6 +6771,10 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
>         if (!OPTS_VALID(opts, bpf_object_open_opts))
>                 return ERR_PTR(-EINVAL);
>
> +       /* btf_custom_path and btf_custom can't be used together */
> +       if (OPTS_GET(opts, btf_custom_path, NULL) && OPTS_GET(opts, btf_custom, NULL))
> +               return ERR_PTR(-EINVAL);
> +
>         obj_name = OPTS_GET(opts, object_name, NULL);
>         if (obj_buf) {
>                 if (!obj_name) {
> @@ -6796,6 +6804,10 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
>                 }
>         }
>
> +       obj->btf_vmlinux_override = OPTS_GET(opts, btf_custom, NULL);
> +       if (obj->btf_vmlinux_override)
> +               obj->user_provided_btf_vmlinux = true;
> +
>         kconfig = OPTS_GET(opts, kconfig, NULL);
>         if (kconfig) {
>                 obj->kconfig = strdup(kconfig);
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 4ec69f224342..908ab04dc9bd 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -104,8 +104,15 @@ struct bpf_object_open_opts {
>          * struct_ops, etc) will need actual kernel BTF at /sys/kernel/btf/vmlinux.
>          */
>         const char *btf_custom_path;
> +       /* Pointer to the custom BTF object to be used for BPF CO-RE relocations.
> +        * This custom BTF completely replaces the use of vmlinux BTF
> +        * for the purpose of CO-RE relocations.
> +        * NOTE: any other BPF feature (e.g., fentry/fexit programs,
> +        * struct_ops, etc) will need actual kernel BTF at /sys/kernel/btf/vmlinux.
> +        */
> +       struct btf *btf_custom;
>  };
> -#define bpf_object_open_opts__last_field btf_custom_path
> +#define bpf_object_open_opts__last_field btf_custom
>
>  LIBBPF_API struct bpf_object *bpf_object__open(const char *path);
>  LIBBPF_API struct bpf_object *
> --
> 2.25.1
>
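
(A small interface note from the patch: the check added in
__bpf_object__open() makes btf_custom_path and btf_custom mutually
exclusive, so a combination like the sketch below, with placeholder
names and paths, fails at open time:)

#include <bpf/btf.h>
#include <bpf/libbpf.h>

/* sketch only: btf_custom_path and btf_custom are mutually exclusive */
static int open_with_both_options(struct btf *btf)
{
	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
		.btf_custom_path = "custom.btf",
		.btf_custom = btf,
	);
	struct bpf_object *obj;

	obj = bpf_object__open_file("prog.bpf.o", &opts);

	/* both fields set -> open fails; libbpf_get_error() reports -EINVAL */
	return libbpf_get_error(obj);
}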
