Message-ID: <CAHap4zs9yZFx-z2h=vsqgdzfNgVssNvoWZ3VWswtwREZ0DnHsw@mail.gmail.com>
Date:   Wed, 12 Jan 2022 09:26:58 -0500
From:   Mauricio Vásquez Bernal <mauricio@...volk.io>
To:     Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc:     Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Quentin Monnet <quentin@...valent.com>,
        Rafael David Tinoco <rafaeldtinoco@...il.com>,
        Lorenzo Fontana <lorenzo.fontana@...stic.co>,
        Leonardo Di Donato <leonardo.didonato@...stic.co>
Subject: Re: [PATCH bpf-next v3 3/3] bpftool: Implement btfgen

On Wed, Dec 22, 2021 at 7:33 PM Andrii Nakryiko
<andrii.nakryiko@...il.com> wrote:
>
> On Fri, Dec 17, 2021 at 10:57 AM Mauricio Vásquez <mauricio@...volk.io> wrote:
> >
> > BTFGen's goal is to produce a BTF file that contains **only** the
> > information needed by an eBPF program. In a first step, the algorithm
> > collects the types involved in each relocation present in the object
> > and "marks" them as needed. Types are collected differently depending
> > on the relocation kind: for field-based relocations only the union and
> > struct members involved are considered, while for type-based
> > relocations the whole types are added. Enum value based relocations
> > are not supported in this iteration yet.
> >
> > A second step generates a BTF file from the "marked" types. This step
> > walks the original BTF file and extracts the types, and their members,
> > that were "marked" as needed in the first step.
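
(For reference, the second step essentially amounts to something like the
sketch below. The names and the loop over marked type IDs are
illustrative, not the actual code in this patch; the real implementation
copies struct/union members selectively so that only the "marked" members
are kept, and it also remaps referenced type IDs in a follow-up pass.)

    /* sketch: copy only the types marked in step 1 into a new BTF;
     * uses the public libbpf API from <bpf/btf.h>
     */
    static struct btf *btf_from_marked(const struct btf *src_btf,
                                       const __u32 *marked_ids, int n)
    {
            struct btf *new_btf = btf__new_empty();
            int i, err;

            if (!new_btf)
                    return NULL;

            for (i = 0; i < n; i++) {
                    const struct btf_type *t;

                    t = btf__type_by_id(src_btf, marked_ids[i]);

                    /* btf__add_type() copies one type (with all of its
                     * members) from src_btf and returns its new type ID
                     */
                    err = btf__add_type(new_btf, src_btf, t);
                    if (err < 0) {
                            btf__free(new_btf);
                            return NULL;
                    }
            }

            return new_btf;
    }
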
> >
> > This command is implemented under the "gen" command in bpftool and the
> > syntax is the following:
> >
> > $ bpftool gen btf INPUT OUTPUT OBJECT(S)
> >
> > INPUT can be either a single BTF file or a folder containing BTF files.
> > When it's a folder, one BTF file is generated for each BTF file it
> > contains. OUTPUT is the file (or folder) where the generated files are
> > stored, and OBJECT(S) is the list of BPF objects we want to generate
> > the BTF file(s) for (each generated BTF file contains all the types
> > needed by all of the objects).
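
To make that concrete, an invocation could look like this (paths and
object names below are made up for illustration):

$ bpftool gen btf /tmp/btfs/ /tmp/output/ prog1.bpf.o prog2.bpf.o

i.e., for each BTF file under /tmp/btfs/ a reduced BTF file is written to
/tmp/output/, containing only the types needed by prog1.bpf.o and
prog2.bpf.o.
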
> >
> > Signed-off-by: Mauricio Vásquez <mauricio@...volk.io>
> > Signed-off-by: Rafael David Tinoco <rafael.tinoco@...asec.com>
> > Signed-off-by: Lorenzo Fontana <lorenzo.fontana@...stic.co>
> > Signed-off-by: Leonardo Di Donato <leonardo.didonato@...stic.co>
> > ---
> >  tools/bpf/bpftool/gen.c | 892 ++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 892 insertions(+)
> >
>
> I haven't looked through the details of stripping BTF itself; let's
> finalize the CO-RE relocation parts first. Maybe for the next revision
> you could split the bpftool changes in two in some reasonable way, so
> that each patch concentrates on a different step of the process a bit
> more? E.g., the first patch might set up the new command and the BTF
> stripping parts but leave the CO-RE relocation logic unimplemented, and
> the second patch fills in that part. That should make it easier to
> review this big patch.

I totally agree. Will send v4 with more granular commits.

> Please also cc Quentin Monnet to review bpftool parts as well.

He's already there.

> > +static int btf_reloc_info_gen_type(struct btf_reloc_info *info, struct bpf_core_spec *targ_spec)
> > +{
> > +       struct btf *btf = (struct btf *) info->src_btf;
> > +       struct btf_type *btf_type;
> > +       int err = 0;
> > +
> > +       btf_type = (struct btf_type *) btf__type_by_id(btf, targ_spec->root_type_id);
> > +
> > +       return btf_reloc_put_type_all(btf, info, btf_type, targ_spec->root_type_id);
> > +}
> > +
> > +static int btf_reloc_info_gen_enumval(struct btf_reloc_info *info, struct bpf_core_spec *targ_spec)
> > +{
> > +       p_err("untreated enumval based relocation");
>
> why untreated? what's the problem supporting it?
>

Nothing; we just haven't given it any priority yet. It'll be part of the next iteration.

> > +static int btf_reloc_info_gen(struct btf_reloc_info *info, struct bpf_core_spec *res)
> > +{
> > +       if (core_relo_is_type_based(res->relo_kind))
> > +               return btf_reloc_info_gen_type(info, res);
> > +
> > +       if (core_relo_is_enumval_based(res->relo_kind))
> > +               return btf_reloc_info_gen_enumval(info, res);
> > +
> > +       if (core_relo_is_field_based(res->relo_kind))
> > +               return btf_reloc_info_gen_field(info, res);
>
> you can have a simple switch here instead of exposing libbpf internal helpers
>

Will do.
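
Something like this, I guess (just a sketch; it assumes the BPF_CORE_*
values from the UAPI enum bpf_core_relo_kind and reuses the helpers from
this patch):

    switch (res->relo_kind) {
    case BPF_CORE_FIELD_BYTE_OFFSET:
    case BPF_CORE_FIELD_BYTE_SIZE:
    case BPF_CORE_FIELD_EXISTS:
    case BPF_CORE_FIELD_SIGNED:
    case BPF_CORE_FIELD_LSHIFT_U64:
    case BPF_CORE_FIELD_RSHIFT_U64:
            /* field-based relocation: mark only the members involved */
            return btf_reloc_info_gen_field(info, res);
    case BPF_CORE_TYPE_ID_LOCAL:
    case BPF_CORE_TYPE_ID_TARGET:
    case BPF_CORE_TYPE_EXISTS:
    case BPF_CORE_TYPE_SIZE:
            /* type-based relocation: mark the whole type */
            return btf_reloc_info_gen_type(info, res);
    case BPF_CORE_ENUMVAL_EXISTS:
    case BPF_CORE_ENUMVAL_VALUE:
            /* enum value based relocation: not supported yet */
            return btf_reloc_info_gen_enumval(info, res);
    default:
            return -EINVAL;
    }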

> > +static int btfgen_obj_reloc_info_gen(struct btf_reloc_info *reloc_info, struct bpf_object *obj)
> > +{
> > +       const struct btf_ext_info_sec *sec;
> > +       const struct bpf_core_relo *rec;
> > +       const struct btf_ext_info *seg;
> > +       struct hashmap *cand_cache;
> > +       int err, insn_idx, sec_idx;
> > +       struct bpf_program *prog;
> > +       struct btf_ext *btf_ext;
> > +       const char *sec_name;
> > +       size_t nr_programs;
> > +       struct btf *btf;
> > +       unsigned int i;
> > +
> > +       btf = bpf_object__btf(obj);
> > +       btf_ext = bpf_object__btf_ext(obj);
> > +
> > +       if (btf_ext->core_relo_info.len == 0)
> > +               return 0;
> > +
> > +       cand_cache = bpf_core_create_cand_cache();
> > +       if (IS_ERR(cand_cache))
> > +               return PTR_ERR(cand_cache);
> > +
> > +       bpf_object_set_vmlinux_override(obj, reloc_info->src_btf);
> > +
> > +       seg = &btf_ext->core_relo_info;
> > +       for_each_btf_ext_sec(seg, sec) {
> > +               bool prog_found;
> > +
> > +               sec_name = btf__name_by_offset(btf, sec->sec_name_off);
> > +               if (str_is_empty(sec_name)) {
> > +                       err = -EINVAL;
> > +                       goto out;
> > +               }
> > +
> > +               prog_found = false;
> > +               nr_programs = bpf_object__get_nr_programs(obj);
> > +               for (i = 0; i < nr_programs; i++)       {
> > +                       prog = bpf_object__get_program(obj, i);
> > +                       if (strcmp(bpf_program__section_name(prog), sec_name) == 0) {
> > +                               prog_found = true;
> > +                               break;
> > +                       }
> > +               }
> > +
> > +               if (!prog_found) {
> > +                       pr_warn("sec '%s': failed to find a BPF program\n", sec_name);
> > +                       err = -EINVAL;
> > +                       goto out;
> > +               }
> > +
> > +               sec_idx = bpf_program__sec_idx(prog);
> > +
> > +               for_each_btf_ext_rec(seg, sec, i, rec) {
> > +                       struct bpf_core_relo_res targ_res;
> > +                       struct bpf_core_spec targ_spec;
> > +
> > +                       insn_idx = rec->insn_off / BPF_INSN_SZ;
> > +
> > +                       prog = find_prog_by_sec_insn(obj, sec_idx, insn_idx);
> > +                       if (!prog) {
> > +                               pr_warn("sec '%s': failed to find program at insn #%d for CO-RE offset relocation #%d\n",
> > +                                       sec_name, insn_idx, i);
> > +                               err = -EINVAL;
> > +                               goto out;
> > +                       }
> > +
> > +                       err = bpf_core_calc_relo_res(prog, rec, i, btf, cand_cache, &targ_res,
> > +                                                    &targ_spec);
>
>
> I don't think you need to do *exactly* what libbpf is doing.
> bpf_core_calc_relo_res() doesn't add much on top of
> bpf_core_calc_relo_insn(). If you use bpf_core_calc_relo_insn()
> directly, and expose bpf_core_add_cands()/bpf_core_free_cands() and use
> them directly as well, bypassing bpf_object completely, you won't need
> the btf_vmlinux override and everything becomes less coupled, I think.
> In the future, if you'd like to support kernel module BTFs, you'll have
> all the necessary flexibility to do that, while if you try to reuse
> every single line of bpf_object's code we'll keep adding more hacks
> like your bpf_object_set_vmlinux_override().
>
> Fundamentally, you don't care about bpf_object and bpf_programs. All
> you need to do is parse .BTF.ext (you can do that with just
> btf__parse(), no need to even construct a bpf_object!). Construct a
> candidate cache, then iterate over each CO-RE record, find/add
> candidates, calculate the relocation result, and use it for your
> algorithm.
>
> Yes, there will be a bit of duplication (candidate search), but that's
> better than trying to turn bpf_object inside out with all the custom
> getters/setters that you are exposing (even if it's libbpf internal
> only, it still makes it really hard to reason about what's going on and
> what the consequences are of manual control over a lot of bpf_object's
> internal implementation details).
>

Thanks a lot for this suggestion. We implemented it this way and the
result is much better.
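
To give an idea, the per-object processing now looks roughly like this
(simplified sketch, not the final code: the candidate cache and most
error handling are omitted, btfgen_find_cands()/btfgen_record_reloc() are
placeholder names for the candidate lookup and the marking step, the
exact bpf_core_calc_relo_insn() signature is assumed, and it still relies
on libbpf's internal headers for the .BTF.ext iteration macros):

    static int btfgen_record_obj(struct btf_reloc_info *info, const char *obj_path)
    {
            const struct btf_ext_info_sec *sec;
            const struct bpf_core_relo *relo;
            const struct btf_ext_info *seg;
            struct btf_ext *btf_ext = NULL;
            struct btf *btf = NULL;
            unsigned int relo_idx;
            int err = 0;

            /* no bpf_object needed: parse the object's BTF and .BTF.ext directly */
            btf = btf__parse(obj_path, &btf_ext);
            if (!btf)
                    return -errno;
            if (!btf_ext || btf_ext->core_relo_info.len == 0)
                    goto out;

            seg = &btf_ext->core_relo_info;
            for_each_btf_ext_sec(seg, sec) {
                    const char *sec_name = btf__name_by_offset(btf, sec->sec_name_off);

                    for_each_btf_ext_rec(seg, sec, relo_idx, relo) {
                            struct bpf_core_spec specs_scratch[3] = {};
                            struct bpf_core_relo_res targ_res = {};
                            struct bpf_core_cand_list *cands = NULL;

                            /* find candidates in the target BTF (cached in the real code) */
                            if (relo->kind != BPF_CORE_TYPE_ID_LOCAL) {
                                    cands = btfgen_find_cands(btf, info->src_btf, relo->type_id);
                                    if (!cands) {
                                            err = -errno;
                                            goto out;
                                    }
                            }

                            /* compute the relocation result directly, no bpf_object involved */
                            err = bpf_core_calc_relo_insn(sec_name, relo, relo_idx, btf, cands,
                                                          specs_scratch, &targ_res);
                            if (err)
                                    goto out;

                            /* feed the result into the type-marking step */
                            err = btfgen_record_reloc(info, &targ_res);
                            if (err)
                                    goto out;
                    }
            }
    out:
            btf__free(btf);
            btf_ext__free(btf_ext);
            return err;
    }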
