Date:   Tue, 2 Nov 2021 16:26:21 -0500
From:   Mauricio Vásquez Bernal <mauricio@...volk.io>
To:     Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Network Development <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Rafael David Tinoco <rafaeldtinoco@...il.com>,
        Lorenzo Fontana <lorenzo.fontana@...stic.co>
Subject: Re: [PATCH bpf-next 0/2] libbpf: Implement BTF Generator API

> Part #2 absolutely doesn't belong in libbpf. Libbpf exposes enough BTF
> constructing APIs to implement this in any application, bpftool or
> otherwise. It's also a relatively straightforward problem: mark used
> types and fields, create a copy of BTF with only those types and
> fields.

Totally agree.

> The last point is important, because to solve the problem 1b (exposing
> CO-RE relo info), the best way to minimize public API commitments is
> to (optionally, probably) request libbpf to record its CO-RE relo
> decisions. Here's what I propose, specifically:
>   1. Add something like "bool record_core_relo_info" (awful name,
> don't use it) in bpf_object_open_opts.
>   2. If it is set to true, libbpf will keep a "log" of CO-RE
> relocation decisions, recording stuff like program name, instruction
> index, local spec (i.e., root_type_id, spec string, relo kind, maybe
> something else), target spec (kernel type_id, kernel spec string, also
> module ID, if it's not vmlinux BTF). We can also record relocated
> value (i.e., field offset, actual enum value, true/false for
> existence, etc). All these are stable concepts, so I'd feel more
> comfortable exposing them, compared to stuff like bpf_core_accessor
> and other internal details.
>   3. The memory for all that will be managed by libbpf for simplicity
> of an API, and we'll expose accessors to get those arrays (at object
> level or per-program level is TBD).
>   4. This info will be available after the prepare() step and will be
> discarded either at create_maps() or load().

I like this proposal. It fits very well with the BTFGen use case.
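
For what it's worth, here's roughly how I picture the opt-in side from
the caller's point of view (just a sketch; record_core_relo_info is the
placeholder name from your point 1, and error handling is elided):

```
#include <bpf/libbpf.h>

/* sketch only: .record_core_relo_info is the placeholder name from
 * point 1 above, not an existing libbpf option
 */
static struct bpf_object *open_with_relo_log(const char *path)
{
	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
		.record_core_relo_info = true,	/* hypothetical opt-in */
	);
	struct bpf_object *obj;

	obj = bpf_object__open_file(path, &opts);
	if (libbpf_get_error(obj))
		return NULL;

	/* per point 4, the recorded CO-RE decisions would be available
	 * after the prepare() step and dropped at create_maps()/load()
	 */
	return obj;
}
```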

Regarding the information to expose, IIUC that'd be slightly reduced
versions of struct bpf_core_relo_res and struct bpf_core_spec. I think
we could expose the following structures and a function to get them
(please ignore the naming for now):

```
/* reduced version of struct bpf_core_spec */
struct bpf_core_spec_pub {
	const struct btf *btf;
	__u32 root_type_id;
	enum bpf_core_relo_kind kind;
	/* raw, low-level spec: 1-to-1 with accessor spec string;
	 * we could also use access_str_off and let the user parse it
	 */
	int raw_spec[BPF_CORE_SPEC_MAX_LEN];
	/* raw spec length */
	int raw_len;
};

struct bpf_core_relo_pub {
	/* if we expose this per program then it's not needed */
	const char *prog_name;
	int insn_idx;

	/* allows the user to understand whether the relocation succeeded */
	bool poison;

	/* new field offset for field-based CO-RE relos */
	__u32 new_offset;

	/* TODO: fields for type- and enum-based relos */

	/* BTFGen only needs targ_spec; I suppose local_spec would be
	 * useful for other use cases
	 */
	struct bpf_core_spec_pub local_spec, targ_spec;
};

LIBBPF_API struct bpf_core_relo_pub *
bpf_program__core_relos(struct bpf_program *prog);
```

I don't have strong opinions about exposing it by object or by
program. Both cases should work the same for BTFGen.
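
On the consumption side, for BTFGen I imagine something along these
lines (again just a sketch; bpf_program__core_relo_cnt() and
mark_type_used() are made-up helpers for illustration):

```
/* sketch of how BTFGen could walk the recorded relocations;
 * bpf_program__core_relo_cnt() and mark_type_used() are assumed
 * helpers, not part of the proposal above
 */
static void collect_used_types(struct bpf_object *obj)
{
	struct bpf_program *prog;

	bpf_object__for_each_program(prog, obj) {
		struct bpf_core_relo_pub *relos = bpf_program__core_relos(prog);
		size_t i, cnt = bpf_program__core_relo_cnt(prog);

		for (i = 0; i < cnt; i++) {
			struct bpf_core_spec_pub *targ = &relos[i].targ_spec;

			if (relos[i].poison)
				continue;	/* relocation didn't succeed */

			/* record the target root type (and the members named by
			 * targ->raw_spec) so BTFGen can later emit a reduced BTF
			 * with only the types the programs actually use
			 */
			mark_type_used(targ->btf, targ->root_type_id);
		}
	}
}
```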

Does this make sense to you?

Btw, I'm probably not the right person to give opinions about how this
API should be split. I'd be happy to hear other opinions here and to
make this change once we agree on a path forward.
