Message-ID: <CAADnVQLaGNF=QypjnM1pJV_-49D3AXaRLpvCb31ozpg_39GO=Q@mail.gmail.com>
Date: Thu, 5 Feb 2026 15:11:19 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Ivan Babrou <ivan@...udflare.com>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>, 
	Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>, 
	Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>, 
	Yonghong Song <yonghong.song@...ux.dev>, John Fastabend <john.fastabend@...il.com>, 
	KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...ichev.me>, Hao Luo <haoluo@...gle.com>, 
	Jiri Olsa <jolsa@...nel.org>, bpf <bpf@...r.kernel.org>, 
	LKML <linux-kernel@...r.kernel.org>, kernel-team <kernel-team@...udflare.com>
Subject: Re: [PATCH] kallsyms: cache bpf symbols to avoid quadratic iteration

On Thu, Jan 29, 2026 at 8:50 PM Ivan Babrou via B4 Relay
<devnull+ivan.cloudflare.com@...nel.org> wrote:
>
> From: Ivan Babrou <ivan@...udflare.com>
>
> The existing code iterates the whole list of bpf ksyms until the right
> one is found, which means quadratic complexity on top of linked list
> pointer chasing under RCU. This isn't noticeable for most installations,
> but when one has 10000 bpf programs loaded, things start to add up and
> reading from `/proc/kallsyms` slows down a lot.
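
To make the cost concrete: each /proc/kallsyms line asks for the n-th bpf
symbol, and the lookup restarts the list walk from the head every time.
A simplified sketch of that walk, modeled on the kernel's bpf_get_kallsym()
in kernel/bpf/core.c (field and helper names are taken from that file and
may differ in detail):

int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
		    char *name)
{
	struct bpf_ksym *ksym;
	unsigned int it = 0;
	int ret = -ERANGE;

	rcu_read_lock();
	/* Restarts from the list head on every call, so dumping all N
	 * bpf symbols costs O(N^2) list-node visits in total.
	 */
	list_for_each_entry_rcu(ksym, &bpf_kallsyms, lnode) {
		if (it++ != symnum)
			continue;
		strscpy(name, ksym->name, KSYM_NAME_LEN);
		*value = ksym->start;
		*type = BPF_SYM_ELF_TYPE;
		ret = 0;
		break;
	}
	rcu_read_unlock();

	return ret;
}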

How real is this? 10k bpf programs?

>  static int get_ksymbol_bpf(struct kallsym_iter *iter)
>  {
> +       unsigned int index = iter->pos - iter->pos_ftrace_mod_end;
>         int ret;
>
> +       if (iter->bpf_cache) {
> +               if (index >= iter->bpf_cache_count) {
> +                       iter->pos_bpf_end = iter->pos;
> +                       return 0;
> +               }
> +
> +               strscpy(iter->module_name, "bpf", MODULE_NAME_LEN);
> +               iter->exported = 0;
> +               strscpy(iter->name, iter->bpf_cache[index].name, KSYM_NAME_LEN);
> +               iter->value = iter->bpf_cache[index].value;
> +               iter->type = BPF_SYM_ELF_TYPE;
> +
> +               return 1;
> +       }

I'm sure there are other ways to speed up get_ksymbol_bpf().
bpf_kallsyms is a list, but the same symbols are also kept in
struct latch_tree_root bpf_tree. Integrate that tree into
get_ksymbol_bpf(): latch_tree_find() will surely be faster than a
linear search through the list. Or go the extra mile and implement
an iterator for the latch_tree.
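
For illustration, the address-keyed lookup through bpf_tree already looks
roughly like this (a sketch modeled on bpf_ksym_find() in kernel/bpf/core.c;
names and details are assumptions taken from that file):

struct bpf_ksym *bpf_ksym_find(unsigned long addr)
{
	struct latch_tree_node *n;

	/* O(log n) tree walk under the latch instead of a linear scan
	 * of the bpf_kallsyms list.
	 */
	n = latch_tree_find((void *)addr, &bpf_tree, &bpf_tree_ops);
	return n ? container_of(n, struct bpf_ksym, tnode) : NULL;
}

Note that latch_tree_find() is keyed by address while get_ksymbol_bpf()
asks for the n-th symbol, which is presumably why an in-order iterator
over the latch tree comes up as the more complete option.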

pw-bot: cr
