Message-ID: <d1ad0b4d-574c-15e5-928f-2d9acc30dfe1@iogearbox.net>
Date: Tue, 15 Aug 2023 17:16:41 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Rong Tao <rtoax@...mail.com>, sdf@...gle.com, ast@...nel.org
Cc: rongtao@...tc.cn, Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>, Hao Luo <haoluo@...gle.com>,
Jiri Olsa <jolsa@...nel.org>, Mykola Lysenko <mykolal@...com>,
Shuah Khan <shuah@...nel.org>,
"open list:BPF [GENERAL] (Safe Dynamic Programs and Tools)"
<bpf@...r.kernel.org>,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next v3] selftests/bpf: trace_helpers.c: optimize
kallsyms cache
On 8/12/23 7:57 AM, Rong Tao wrote:
> From: Rong Tao <rongtao@...tc.cn>
>
> Static allocation of the syms array often fails because the number of
> kernel symbols exceeds the MAX_SYMS limit. Bumping MAX_SYMS from 300000
> to 400000, as commit e76a014334a6 ("selftests/bpf: Bump and validate
> MAX_SYMS") did, mitigates the problem but is not a complete fix.
>
> This commit uses dynamic memory allocation, which completely solves the
> problem caused by the limitation of the number of kallsyms.
>
> Signed-off-by: Rong Tao <rongtao@...tc.cn>
> ---
> v3: Do not wrap the symbols in a struct, and check the ksyms__add_symbol()
>     return value.
> v2: https://lore.kernel.org/lkml/tencent_B655EE5E5D463110D70CD2846AB3262EED09@qq.com/
> Do the usual len/capacity scheme here to amortize the cost of realloc, and
> don't free symbols.
> v1: https://lore.kernel.org/lkml/tencent_AB461510B10CD484E0B2F62E3754165F2909@qq.com/
> ---
> tools/testing/selftests/bpf/trace_helpers.c | 42 ++++++++++++++++-----
> 1 file changed, 32 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
> index f83d9f65c65b..d8391a2122b4 100644
> --- a/tools/testing/selftests/bpf/trace_helpers.c
> +++ b/tools/testing/selftests/bpf/trace_helpers.c
> @@ -18,10 +18,32 @@
> #define TRACEFS_PIPE "/sys/kernel/tracing/trace_pipe"
> #define DEBUGFS_PIPE "/sys/kernel/debug/tracing/trace_pipe"
>
> -#define MAX_SYMS 400000
> -static struct ksym syms[MAX_SYMS];
> +static struct ksym *syms;
> +static int sym_cap;
> static int sym_cnt;
>
> +static int ksyms__add_symbol(const char *name, unsigned long addr)
> +{
> + void *tmp;
> + unsigned int new_cap;
> +
> + if (sym_cnt + 1 > sym_cap) {
> + new_cap = sym_cap * 4 / 3;
> + tmp = realloc(syms, sizeof(struct ksym) * new_cap);
> + if (!tmp)
> + return -ENOMEM;
> + syms = tmp;
> + sym_cap = new_cap;
> + }
> +
> + syms[sym_cnt].addr = addr;
> + syms[sym_cnt].name = strdup(name);
Fwiw, the strdup() return value should be error-checked, too. And for teardown
in the test suite, let's also add the counterpart that releases all the
allocated memory.
> + sym_cnt++;
> +
> + return 0;
> +}
> +
> static int ksym_cmp(const void *p1, const void *p2)
> {
> return ((struct ksym *)p1)->addr - ((struct ksym *)p2)->addr;
> @@ -33,9 +55,13 @@ int load_kallsyms_refresh(void)
> char func[256], buf[256];
> char symbol;
> void *addr;
> - int i = 0;
> + int ret;
>
> + sym_cap = 1024;
On my dev node, I have:
# cat /proc/kallsyms | wc -l
242586
Why start out so low, at 1k? I would have expected that in most cases we don't
need to hit the realloc() path at all, and only do so in corner cases like the
one in e76a014334a6.
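To illustrate (quick back-of-the-envelope, grow_steps() is a hypothetical
helper and the symbol count is just the number from my box): with a 1k start
and the patch's 4/3 growth factor, loading ~242k symbols walks through about
20 realloc() rounds, whereas starting at the old static size needs none.

```c
/* Back-of-the-envelope: count how many 4/3 growth steps are needed to
 * go from a start capacity to one that can hold nsyms symbols. Numbers
 * are illustrative, not part of trace_helpers.c. */
static int grow_steps(int cap, int nsyms)
{
	int steps = 0;

	while (cap < nsyms) {
		cap = cap * 4 / 3;	/* same growth factor as the patch */
		steps++;
	}
	return steps;
}
```

With nsyms = 242586 (the /proc/kallsyms line count above), grow_steps(1024,
242586) comes out at 20 reallocs, while grow_steps(300000, 242586) is 0.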
> sym_cnt = 0;
> + syms = malloc(sizeof(struct ksym) * sym_cap);
> + if (!syms)
> + return -ENOMEM;
>
> f = fopen("/proc/kallsyms", "r");
> if (!f)
> @@ -46,15 +72,11 @@ int load_kallsyms_refresh(void)
> break;
> if (!addr)
> continue;
> - if (i >= MAX_SYMS)
> - return -EFBIG;
> -
> - syms[i].addr = (long) addr;
> - syms[i].name = strdup(func);
> - i++;
> + ret = ksyms__add_symbol(func, (unsigned long)addr);
> + if (ret)
> + return ret;
> }
> fclose(f);
> - sym_cnt = i;
> qsort(syms, sym_cnt, sizeof(struct ksym), ksym_cmp);
> return 0;
> }
>
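For completeness, the qsort() at the end matters because lookups then
binary-search the array by address, roughly like this (simplified sketch, not
the exact ksym_search() from trace_helpers.c; note the comparator here avoids
the raw subtraction, which can truncate/overflow when the address difference
does not fit in an int):

```c
/* Simplified sketch: sorted-by-address symbol array plus a binary
 * search for the closest preceding symbol. */
#include <stdlib.h>

struct ksym {
	long addr;
	char *name;
};

static int ksym_cmp(const void *p1, const void *p2)
{
	const struct ksym *a = p1, *b = p2;

	/* compare instead of subtracting to avoid int truncation */
	return a->addr < b->addr ? -1 : a->addr > b->addr;
}

static struct ksym *ksym_search(struct ksym *syms, int cnt, long key)
{
	int start = 0, end = cnt;

	while (start < end) {
		int mid = start + (end - start) / 2;

		if (syms[mid].addr <= key)
			start = mid + 1;
		else
			end = mid;
	}
	/* start is the first symbol above key; the one before covers it */
	return start > 0 ? &syms[start - 1] : NULL;
}
```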