Message-ID: <F03F3D79-0FD3-423F-9905-8A093AE2D9AF@fb.com>
Date: Wed, 31 Jul 2019 00:16:03 +0000
From: Song Liu <songliubraving@...com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
CC: Andrii Nakryiko <andriin@...com>, bpf <bpf@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...com>,
"daniel@...earbox.net" <daniel@...earbox.net>,
Yonghong Song <yhs@...com>, Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH v2 bpf-next 02/12] libbpf: implement BPF CO-RE offset
relocation algorithm
> On Jul 30, 2019, at 4:55 PM, Andrii Nakryiko <andrii.nakryiko@...il.com> wrote:
>
> On Tue, Jul 30, 2019 at 4:44 PM Song Liu <songliubraving@...com> wrote:
>>
>>
>>
>>> On Jul 30, 2019, at 12:53 PM, Andrii Nakryiko <andriin@...com> wrote:
>>>
>>> This patch implements the core logic for BPF CO-RE offset relocations.
>>> Every instruction that needs to be relocated has a corresponding
>>> bpf_offset_reloc record as part of BTF.ext. Relocations are performed by
>>> trying to match the recorded "local" relocation spec against potentially
>>> many compatible "target" types, creating a corresponding target spec.
>>> Details of the algorithm are noted in the corresponding comments in the code.
>>>
>>> Signed-off-by: Andrii Nakryiko <andriin@...com>
>>> ---
>>> tools/lib/bpf/libbpf.c | 915 ++++++++++++++++++++++++++++++++++++++++-
>>> tools/lib/bpf/libbpf.h | 1 +
>>> 2 files changed, 909 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>>> index ead915aec349..75da90928257 100644
>>> --- a/tools/lib/bpf/libbpf.c
>>> +++ b/tools/lib/bpf/libbpf.c
>>> @@ -38,6 +38,7 @@
>
> [...]
>
>>>
>>> -static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
>>> - __u32 id)
>>> +static const struct btf_type *
>>> +skip_mods_and_typedefs(const struct btf *btf, __u32 id, __u32 *res_id)
>>> {
>>> const struct btf_type *t = btf__type_by_id(btf, id);
>>>
>>> + if (res_id)
>>> + *res_id = id;
>>> +
>>> while (true) {
>>> switch (BTF_INFO_KIND(t->info)) {
>>> case BTF_KIND_VOLATILE:
>>> case BTF_KIND_CONST:
>>> case BTF_KIND_RESTRICT:
>>> case BTF_KIND_TYPEDEF:
>>> + if (res_id)
>>> + *res_id = t->type;
>>> t = btf__type_by_id(btf, t->type);
>>
>> So btf->types[*res_id] == retval, right? Then with retval and btf, we can
>> calculate *res_id without this change?
>
> Unless I'm missing something very clever here, no. btf->types is an array
> of pointers (it's an index into the variable-sized type records). This
> function returns a `struct btf_type *`, which is one of the **values**
> stored in that array. You are claiming that by having the value of one of
> the array elements you can easily find that element's index? If that were
> possible in O(1), we wouldn't need so many algorithms and data structures
> for search and indexing. You can do that only with a linear search, not
> with clever pointer arithmetic, and not even with a binary search. So I'm
> not sure what you are proposing here...
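
(For illustration, a minimal sketch of the point above: with only a
`struct btf_type *` in hand, the public libbpf API offers no way to get
back to its type ID other than a linear scan over all IDs.
btf_find_type_id is a hypothetical helper, not part of libbpf, and it
assumes a valid btf handle.)

#include <errno.h>
#include <bpf/btf.h>

static int btf_find_type_id(const struct btf *btf, const struct btf_type *t)
{
	__u32 id, nr = btf__get_nr_types(btf);

	/* type ID 0 is the implicit "void" type; real types start at 1 */
	for (id = 1; id <= nr; id++) {
		if (btf__type_by_id(btf, id) == t)
			return id;	/* O(n) in the number of types */
	}
	return -ENOENT;
}

Tracking the ID while walking the type chain, as the patch does with
res_id, avoids this scan entirely.
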
Oops.. Clearly, I made a silly mistake. Sorry for the noise.
Song
>
> The way BTF is defined, a struct btf_type doesn't know its own type ID,
> which is often inconvenient and requires keeping track of that ID when
> it's needed, but that's how it is.
>
> But then again, what are we trying to achieve here? Not returning both
> the id and the pointer? I could always return just the id and easily look
> up the pointer, but having both is super convenient and makes the code
> simpler and shorter, so I'd like to keep it.
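
(A rough call-site sketch, not taken from the patch, of why returning
both the resolved type and its ID is convenient. check_member_type and
member_type_id are made-up names; the BTF_* macros and errno constants
are assumed to be available as they are inside libbpf.c.)

static int check_member_type(const struct btf *btf, __u32 member_type_id)
{
	const struct btf_type *t;
	__u32 res_id;

	/* one call yields both the unwrapped type and its ID */
	t = skip_mods_and_typedefs(btf, member_type_id, &res_id);
	if (BTF_INFO_KIND(t->info) != BTF_KIND_STRUCT)
		return -EINVAL;

	/* use `t` for field inspection, keep `res_id` for later lookups */
	return res_id;
}
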
>
>>
>>> break;
>>> default:
>>> @@ -1044,7 +1051,7 @@ static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
>>> static bool get_map_field_int(const char *map_name, const struct btf *btf,
>>> const struct btf_type *def,
>>> const struct btf_member *m, __u32 *res) {
>
> [...]