Message-ID: <ZPjm9fcy35JJZj6M@krava>
Date: Wed, 6 Sep 2023 22:54:13 +0200
From: Jiri Olsa <olsajiri@...il.com>
To: Rong Tao <rtoax@...mail.com>
Cc: olsajiri@...il.com, andrii@...nel.org, daniel@...earbox.net,
sdf@...gle.com, Rong Tao <rongtao@...tc.cn>,
Alexei Starovoitov <ast@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>, Hao Luo <haoluo@...gle.com>,
Mykola Lysenko <mykolal@...com>, Shuah Khan <shuah@...nel.org>,
Maxime Coquelin <mcoquelin.stm32@...il.com>,
Alexandre Torgue <alexandre.torgue@...s.st.com>,
Yafang Shao <laoar.shao@...il.com>,
"open list:BPF [GENERAL] (Safe Dynamic Programs and Tools)"
<bpf@...r.kernel.org>, open list <linux-kernel@...r.kernel.org>,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>,
"moderated list:ARM/STM32 ARCHITECTURE"
<linux-stm32@...md-mailman.stormreply.com>,
"moderated list:ARM/STM32 ARCHITECTURE"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH bpf-next v11 1/2] selftests/bpf: trace_helpers.c:
optimize kallsyms cache
On Tue, Sep 05, 2023 at 10:04:18PM +0800, Rong Tao wrote:
> From: Rong Tao <rongtao@...tc.cn>
>
> The static ksyms array often runs into problems because the number of
> symbols exceeds the MAX_SYMS limit. Bumping MAX_SYMS from 300000 to 400000
> in commit e76a014334a6 ("selftests/bpf: Bump and validate MAX_SYMS")
> mitigates the problem, but it is not a complete fix.
>
> This commit uses dynamic memory allocation, which completely solves the
> problem caused by the limitation of the number of kallsyms. At the same
> time, add APIs:
>
> load_kallsyms_local()
> ksym_search_local()
> ksym_get_addr_local()
> free_kallsyms_local()
>
> These are used to solve the problem of selftests/bpf needing to refresh
> kallsyms after new symbols are added during testmod testing.
>
> Acked-by: Stanislav Fomichev <sdf@...gle.com>
> Signed-off-by: Rong Tao <rongtao@...tc.cn>
looks good, I added a few more comments, with them addressed you can add my
Acked-by: Jiri Olsa <jolsa@...nel.org>
thanks,
jirka
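
btw, just for reference, this is roughly how I'd expect a test to use the
new local API (untested sketch; the function and symbol names are only
placeholders, prototypes taken from the header change below):

static void check_local_kallsyms(void)
{
        struct ksyms *ksyms;
        struct ksym *sym;
        long addr;

        /* build a private copy of /proc/kallsyms */
        ksyms = load_kallsyms_local(NULL);
        if (!ksyms)
                return;

        /* resolve a symbol by name, then look it back up by address */
        addr = ksym_get_addr_local(ksyms, "bpf_fentry_test1");
        sym = ksym_search_local(ksyms, addr);
        if (sym)
                printf("%s at 0x%lx\n", sym->name, (unsigned long)sym->addr);

        /* drop the private copy when done */
        free_kallsyms_local(ksyms);
}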
> ---
> v11: Remove the useless load_kallsyms_refresh() and fix some code formatting
> v10: https://lore.kernel.org/lkml/tencent_0A73B402B1D440480838ABF7124CE5EA5505@qq.com/
> Keep the original load_kallsyms().
> v9: https://lore.kernel.org/lkml/tencent_254B7015EED7A5D112C45E033DA1822CF107@qq.com/
> Add load_kallsyms_local, ksym_search_local, ksym_get_addr_local functions.
> v8: https://lore.kernel.org/lkml/tencent_6D23FE187408D965E95DFAA858BC7E8C760A@qq.com/
> Resolves inter-thread contention for ksyms global variables.
> v7: https://lore.kernel.org/lkml/tencent_BD6E19C00BF565CD5C36A9A0BD828CFA210A@qq.com/
> Fix __must_check macro.
> v6: https://lore.kernel.org/lkml/tencent_4A09A36F883A06EA428A593497642AF8AF08@qq.com/
> Apply libbpf_ensure_mem()
> v5: https://lore.kernel.org/lkml/tencent_0E9E1A1C0981678D5E7EA9E4BDBA8EE2200A@qq.com/
> Release the allocated memory in load_kallsyms_refresh() upon error,
> given it's dynamically allocated.
> v4: https://lore.kernel.org/lkml/tencent_59C74613113F0C728524B2A82FE5540A5E09@qq.com/
> Make sure most cases we don't need the realloc() path to begin with,
> and check strdup() return value.
> v3: https://lore.kernel.org/lkml/tencent_50B4B2622FE7546A5FF9464310650C008509@qq.com/
> Do not use structs and check the ksyms__add_symbol() return value.
> v2: https://lore.kernel.org/lkml/tencent_B655EE5E5D463110D70CD2846AB3262EED09@qq.com/
> Do the usual len/capacity scheme here to amortize the cost of realloc, and
> don't free symbols.
> v1: https://lore.kernel.org/lkml/tencent_AB461510B10CD484E0B2F62E3754165F2909@qq.com/
SNIP
> +static int ksyms__add_symbol(struct ksyms *ksyms, const char *name,
> +                             unsigned long addr)
> +{
> +        void *tmp;
> +
> +        tmp = strdup(name);
> +        if (!tmp)
> +                return -ENOMEM;
> +        ksyms->syms[ksyms->sym_cnt].addr = addr;
> +        ksyms->syms[ksyms->sym_cnt].name = tmp;
> +
> +        ksyms->sym_cnt++;
> +
> +        return 0;
nit, extra new lines above
> +}
> +
> +void free_kallsyms_local(struct ksyms *ksyms)
> +{
> +        unsigned int i;
> +
> +        if (!ksyms)
> +                return;
> +
> +        if (!ksyms->syms) {
> +                free(ksyms);
> +                return;
> +        }
> +
> +        for (i = 0; i < ksyms->sym_cnt; i++)
> +                free(ksyms->syms[i].name);
> +        free(ksyms->syms);
> +        free(ksyms);
> +}
>
> static int ksym_cmp(const void *p1, const void *p2)
> {
>         return ((struct ksym *)p1)->addr - ((struct ksym *)p2)->addr;
> }
>
> -int load_kallsyms_refresh(void)
> +struct ksyms *load_kallsyms_local(struct ksyms *ksyms)
> {
>         FILE *f;
>         char func[256], buf[256];
>         char symbol;
>         void *addr;
> -        int i = 0;
> +        int ret;
> 
> -        sym_cnt = 0;
> +        /* flush kallsyms, free the previously allocated dynamic memory */
> +        free_kallsyms_local(ksyms);
with the removal of the refresh function (from the last version) there's
no need now for the ksyms argument in load_kallsyms_local,
all the current users of load_kallsyms_local pass a NULL argument
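
i.e. something like below (untested), keeping the cached file-local ksyms
pointer from the patch and calling the function without any argument:

struct ksyms *load_kallsyms_local(void);

int load_kallsyms(void)
{
        /* load and cache kallsyms once, reuse on later calls */
        if (!ksyms)
                ksyms = load_kallsyms_local();
        return ksyms ? 0 : 1;
}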
>
>         f = fopen("/proc/kallsyms", "r");
>         if (!f)
> -                return -ENOENT;
> +                return NULL;
> +
> +        ksyms = calloc(1, sizeof(struct ksyms));
> +        if (!ksyms)
missing fclose(f);
> +                return NULL;
>
>         while (fgets(buf, sizeof(buf), f)) {
>                 if (sscanf(buf, "%p %c %s", &addr, &symbol, func) != 3)
>                         break;
>                 if (!addr)
>                         continue;
> -                if (i >= MAX_SYMS)
> -                        return -EFBIG;
> 
> -                syms[i].addr = (long) addr;
> -                syms[i].name = strdup(func);
> -                i++;
> +                ret = libbpf_ensure_mem((void **) &ksyms->syms, &ksyms->sym_cap,
> +                                        sizeof(struct ksym), ksyms->sym_cnt + 1);
> +                if (ret)
> +                        goto error;
> +                ret = ksyms__add_symbol(ksyms, func, (unsigned long)addr);
> +                if (ret)
> +                        goto error;
>         }
>         fclose(f);
> -        sym_cnt = i;
> -        qsort(syms, sym_cnt, sizeof(struct ksym), ksym_cmp);
> -        return 0;
> +        qsort(ksyms->syms, ksyms->sym_cnt, sizeof(struct ksym), ksym_cmp);
> +        return ksyms;
> +
> +error:
> +        free_kallsyms_local(ksyms);
missing fclose(f); (see the sketch after the function below)
> +        return NULL;
> }
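
re the two missing fclose(f) comments, the tail of the function could end
up roughly like this (untested sketch, the parsing loop and the success-path
fclose(f) stay as they are):

        ksyms = calloc(1, sizeof(struct ksyms));
        if (!ksyms) {
                /* close the file before bailing out */
                fclose(f);
                return NULL;
        }

        /* ... parsing loop and fclose(f) on the success path unchanged ... */

error:
        /* error label is only reached from inside the loop, f is still open */
        fclose(f);
        free_kallsyms_local(ksyms);
        return NULL;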
>
> int load_kallsyms(void)
> {
> -        /*
> -         * This is called/used from multiplace places,
> -         * load symbols just once.
> -         */
> -        if (sym_cnt)
> -                return 0;
> -        return load_kallsyms_refresh();
> +        if (!ksyms)
> +                ksyms = load_kallsyms_local(NULL);
> +        return ksyms ? 0 : 1;
> }
>
> -struct ksym *ksym_search(long key)
> +struct ksym *ksym_search_local(struct ksyms *ksyms, long key)
> {
> -        int start = 0, end = sym_cnt;
> +        int start = 0, end = ksyms->sym_cnt;
>         int result;
> 
> +        if (!ksyms)
> +                return NULL;
I don't think we need to check !ksyms here, you don't do that check
in ksym_get_addr_local and I think it's fine (the check also comes
after 'end' is already initialized from ksyms->sym_cnt, so it wouldn't
help anyway)
> +
>         /* kallsyms not loaded. return NULL */
> -        if (sym_cnt <= 0)
> +        if (ksyms->sym_cnt <= 0)
>                 return NULL;
> 
>         while (start < end) {
SNIP
> diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h
> index 876f3e711df6..d6eeec85a5e4 100644
> --- a/tools/testing/selftests/bpf/trace_helpers.h
> +++ b/tools/testing/selftests/bpf/trace_helpers.h
> @@ -11,13 +11,18 @@ struct ksym {
>         long addr;
>         char *name;
> };
> +struct ksyms;
>
> int load_kallsyms(void);
> -int load_kallsyms_refresh(void);
> -
> struct ksym *ksym_search(long key);
> long ksym_get_addr(const char *name);
>
> +struct ksyms *load_kallsyms_local(struct ksyms *ksyms);
> +struct ksym *ksym_search_local(struct ksyms *ksyms, long key);
> +long ksym_get_addr_local(struct ksyms *ksyms, const char *name);
> +
nit, extra newline
> +void free_kallsyms_local(struct ksyms *ksyms);
> +
> /* open kallsyms and find addresses on the fly, faster than load + search. */
> int kallsyms_find(const char *sym, unsigned long long *addr);
>
> --
> 2.41.0
>