Date:   Mon, 2 Nov 2020 22:01:55 -0800
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Andrii Nakryiko <andrii@...nel.org>, bpf <bpf@...r.kernel.org>,
        Networking <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <kernel-team@...com>,
        Andrii Nakryiko <andriin@...com>
Subject: Re: [PATCH bpf-next 03/11] libbpf: unify and speed up BTF string deduplication

On Mon, Nov 2, 2020 at 8:59 PM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> On Wed, Oct 28, 2020 at 05:58:54PM -0700, Andrii Nakryiko wrote:
> > From: Andrii Nakryiko <andriin@...com>
> >
> > Revamp BTF dedup's string deduplication to match the approach of writable BTF
> > string management. This allows transferring the deduplicated string index back
> > to the BTF object after deduplication without expensive extra memory copying
> > and hash map re-construction. It also simplifies the code and speeds it up,
> > because hashmap-based string deduplication is faster than the sort + unique
> > approach.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@...com>
> > ---
> >  tools/lib/bpf/btf.c | 265 +++++++++++++++++---------------------------
> >  1 file changed, 99 insertions(+), 166 deletions(-)
> >
> > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > index 89fecfe5cb2b..db9331fea672 100644
> > --- a/tools/lib/bpf/btf.c
> > +++ b/tools/lib/bpf/btf.c
> > @@ -90,6 +90,14 @@ struct btf {
> >       struct hashmap *strs_hash;
> >       /* whether strings are already deduplicated */
> >       bool strs_deduped;
> > +     /* extra indirection layer to make strings hashmap work with stable
> > +      * string offsets and ability to transparently choose between
> > +      * btf->strs_data or btf_dedup->strs_data as a source of strings.
> > +      * This is used for BTF strings dedup to transfer deduplicated strings
> > +      * data back to struct btf without re-building strings index.
> > +      */
> > +     void **strs_data_ptr;
>
> I thought one of the ideas of dedup algo was that strings were deduped first,
> so there is no need to rebuild them.

Ugh.. many things to unpack here. Let's try.

Yes, the idea of dedup is to have only unique strings. But we were
always rebuilding strings during dedup; here we are just changing the
string dedup algorithm from sort+uniq to a hash table. We were deduping
strings unconditionally because we don't know how the BTF strings
section was created in the first place, or whether it's already
deduplicated. So we always had to do it.

With BTF write APIs the situation became a bit more nuanced. If we
create BTF programmatically from scratch (btf_new_empty()), then
libbpf guarantees (by construction) that all added strings are
auto-deduped. In such a case btf->strs_deduped will be set to true and
during btf_dedup() we'll skip string deduplication. It's purely a
performance improvement and it benefits the main btf_dedup workflow in
pahole.

But if ready-built BTF was loaded from somewhere first and then
modified with BTF write APIs, then it's a bit different. For existing
strings, when we transition from read-only BTF to writable BTF, we
build string lookup hashmap, but we don't deduplicate and remap string
offsets. So if the loaded BTF had string duplicates, it will continue
having string duplicates. The string lookup index will pick an
arbitrary instance of a duplicated string as the unique key, but the
strings data will still contain duplicates, and there will be types
that still reference a duplicated string. That holds until (and if) we
do btf_dedup(). At that time we'll create another unique hash table
*and* will remap all string offsets across all types.

I did it this way intentionally (not remapping strings when doing
read-only -> writable BTF transition) to not accidentally corrupt
.BTF.ext strings. If I were to do full string dedup for r/o ->
writable transition, I'd need to add APIs to "link" struct btf_ext to
struct btf, so that libbpf could remap .BTF.ext strings transparently.
But I didn't want to add those APIs (yet) and didn't want to deal with
mutable struct btf_ext (yet).

So, in short, for strings dedup fundamentally nothing changed at all.

> Then split BTF cannot touch base BTF strings and they're immutable.

This is exactly the case right now. Nothing in base BTF changes, ever.

> But the commit log is talking about transfer of strings and
> hash map re-construction? Why split BTF would reconstruct anything?

This transfer of strings is for split BTF's strings data only. In the
general case, we have some unknown strings data in split BTF. When we
do dedup, we need to make sure that split BTF strings are deduplicated
(we don't touch base BTF strings at all). For that we need to
construct a new hashmap. Once we've constructed it, we have new strings
data with deduplicated strings, so to avoid creating another big copy
for struct btf, we just "transfer" that data to struct btf from struct
btf_dedup. void **strs_data_ptr just allows reusing the same (already
constructed) hashmap, the same underlying blob of deduplicated string
data, and the same hashing and equality functions.

> It either finds a string in a base BTF or adds to its own strings section.
> Is it all due to switch to hash? The speedup motivation is clear, but then
> it sounds like that the speedup is causing all these issues.
> The strings could have stayed as-is. Just a bit slower ?

Previously we were able to rewrite strings in-place, and strings data
was never reallocated (because BTF was always read-only). So it was
all a bit simpler. By using double indirection we don't have to build
a third hashmap once we are done with strings dedup; we just replace
struct btf's own string lookup hashmap and string data memory. The
alternative is another expensive memory allocation and a potentially
pretty big hashmap copy.

Apart from the double indirection, the algorithm is much simpler now.
If I had been writing the original BTF dedup in C++, I'd have used a
hashmap approach back then. But we didn't have a hashmap in libbpf yet,
so sort + uniq was chosen.
