Message-ID: <CAErzpmtu7UuP9ttf1oQSuVh6f4BAkKsmfZBjj_+OHs9-oDUfjQ@mail.gmail.com>
Date: Wed, 5 Nov 2025 21:48:42 +0800
From: Donglin Peng <dolinux.peng@...il.com>
To: Eduard Zingerman <eddyz87@...il.com>, Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: ast@...nel.org, linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
Alan Maguire <alan.maguire@...cle.com>, Song Liu <song@...nel.org>,
pengdonglin <pengdonglin@...omi.com>
Subject: Re: [RFC PATCH v4 3/7] libbpf: Optimize type lookup with binary
search for sorted BTF
On Wed, Nov 5, 2025 at 9:17 AM Eduard Zingerman <eddyz87@...il.com> wrote:
>
> On Tue, 2025-11-04 at 16:54 -0800, Andrii Nakryiko wrote:
> > On Tue, Nov 4, 2025 at 4:19 PM Eduard Zingerman <eddyz87@...il.com> wrote:
> > >
> > > On Tue, 2025-11-04 at 16:11 -0800, Andrii Nakryiko wrote:
> > >
> > > [...]
> > >
> > > > > @@ -897,44 +903,134 @@ int btf__resolve_type(const struct btf *btf, __u32 type_id)
> > > > > return type_id;
> > > > > }
> > > > >
> > > > > -__s32 btf__find_by_name(const struct btf *btf, const char *type_name)
> > > > > +/*
> > > > > + * Find BTF types with matching names within the [left, right] index range.
> > > > > + * On success, updates *left and *right to the boundaries of the matching range
> > > > > + * and returns the leftmost matching index.
> > > > > + */
> > > > > +static __s32 btf_find_type_by_name_bsearch(const struct btf *btf, const char *name,
> > > > > + __s32 *left, __s32 *right)
> > > >
> > > > I thought we discussed this, why do you need "right"? Two binary
> > > > searches where one would do just fine.
> > >
> > > I think the idea is that there would be less strcmp's if there is a
> > > long sequence of items with identical names.
> >
> > Sure, it's a tradeoff. But how long is the set of duplicate name
> > entries we expect in kernel BTF? Additional O(logN) over 70K+ types
> > with high likelihood will take more comparisons.
>
> $ bpftool btf dump file vmlinux | grep '^\[' | awk '{print $3}' | sort | uniq -c | sort -k1nr | head
> 51737 '(anon)'
> 277 'bpf_kfunc'
> 4 'long
> 3 'perf_aux_event'
> 3 'workspace'
> 2 'ata_acpi_gtm'
> 2 'avc_cache_stats'
> 2 'bh_accounting'
> 2 'bp_cpuinfo'
> 2 'bpf_fastcall'
>
> 'bpf_kfunc' is probably for decl_tags.
> So I agree with you regarding the second binary search, it is not
> necessary. But skipping all anonymous types (and thus having to
> maintain nr_sorted_types) might be useful, on each search two
> iterations would be wasted to skip those.
Thank you. After removing the redundant second binary search, lookup
performance improved significantly compared with the two-search version.
Test Case: Locate all 58,719 named types in vmlinux BTF
Methodology:
./vmtest.sh -- ./test_progs -t btf_permute/perf -v
Two binary searches (leftmost + rightmost):

| Condition          | Lookup Time | Improvement |
|--------------------|-------------|-------------|
| Unsorted (Linear)  | 17,282 ms   | Baseline    |
| Sorted (Binary)    | 19 ms       | 909x faster |

One binary search (leftmost only):

| Condition          | Lookup Time | Improvement  |
|--------------------|-------------|--------------|
| Unsorted (Linear)  | 17,619 ms   | Baseline     |
| Sorted (Binary)    | 10 ms       | 1762x faster |
Here is the implementation using the single binary search approach.
I believe this scenario differs from find_linfo because we cannot
determine in advance whether the specified type name will be present.
Please correct me if I've misunderstood anything; any guidance on this
matter is welcome.
static __s32 btf_find_type_by_name_bsearch(const struct btf *btf,
					   const char *name,
					   __s32 start_id)
{
	const struct btf_type *t;
	const char *tname;
	__s32 l, r, m, lmost = -ENOENT;
	int ret;

	/* find the leftmost btf_type whose name matches */
	l = start_id;
	r = btf__type_cnt(btf) - 1;
	while (l <= r) {
		m = l + (r - l) / 2;
		t = btf_type_by_id(btf, m);
		if (!t->name_off) {
			/* anonymous types sort after named ones */
			ret = 1;
		} else {
			tname = btf__str_by_offset(btf, t->name_off);
			ret = !tname ? 1 : strcmp(tname, name);
		}
		if (ret < 0) {
			l = m + 1;
		} else {
			if (ret == 0)
				lmost = m; /* keep searching to the left */
			r = m - 1;
		}
	}
	return lmost;
}
static __s32 btf_find_type_by_name_kind(const struct btf *btf, int start_id,
					const char *type_name, __u32 kind)
{
	const struct btf_type *t;
	const char *tname;
	int err = -ENOENT;
	__u32 total;

	if (!btf)
		goto out;

	if (start_id < btf->start_id) {
		err = btf_find_type_by_name_kind(btf->base_btf, start_id,
						 type_name, kind);
		if (err == -ENOENT)
			start_id = btf->start_id;
	}

	if (err == -ENOENT) {
		if (btf_check_sorted((struct btf *)btf)) {
			/* binary search */
			__s32 ret;

			/* get the leftmost entry with a matching name */
			ret = btf_find_type_by_name_bsearch(btf, type_name,
							    start_id);
			if (ret < 0)
				goto out;

			/* skip kind checking */
			if (kind == -1)
				return ret;

			/*
			 * Walk forward through the run of entries sharing
			 * the same name and return the first one whose
			 * kind matches. The name check is redundant for
			 * the first entry but keeps the loop simple.
			 */
			total = btf__type_cnt(btf);
			for (; ret < total; ret++) {
				t = btf_type_by_id(btf, ret);
				if (!t->name_off)
					break;
				tname = btf__str_by_offset(btf, t->name_off);
				if (!tname || strcmp(tname, type_name))
					break;
				if (btf_kind(t) == kind)
					return ret;
			}
		} else {
			/* linear search */
			...
		}
	}
out:
	return err;
}