Message-ID: <CAEf4BzYrRSB2wSYVmMCGA80RY6Hy2Chtt3MnXFy7+-Feh+2FBw@mail.gmail.com>
Date: Mon, 11 Apr 2022 15:15:32 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Jiri Olsa <jolsa@...nel.org>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...omium.org>
Subject: Re: [RFC bpf-next 3/4] bpf: Resolve symbols with kallsyms_lookup_names
for kprobe multi link
On Thu, Apr 7, 2022 at 5:53 AM Jiri Olsa <jolsa@...nel.org> wrote:
>
> Use the kallsyms_lookup_names function to speed up symbol lookup in
> kprobe multi link attachment, replacing the current
> kprobe_multi_resolve_syms function.
>
> This speeds up bpftrace kprobe attachment:
>
> Before:
>
> # perf stat -r 5 -e cycles ./src/bpftrace -e 'kprobe:x* { } i:ms:1 { exit(); }'
> ...
> 6.5681 +- 0.0225 seconds time elapsed ( +- 0.34% )
>
> After:
>
> # perf stat -r 5 -e cycles ./src/bpftrace -e 'kprobe:x* { } i:ms:1 { exit(); }'
> ...
> 0.5661 +- 0.0275 seconds time elapsed ( +- 4.85% )
>
> Signed-off-by: Jiri Olsa <jolsa@...nel.org>
> ---
> kernel/trace/bpf_trace.c | 123 +++++++++++++++++++++++----------------
> 1 file changed, 73 insertions(+), 50 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index b26f3da943de..2602957225ba 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2226,6 +2226,72 @@ struct bpf_kprobe_multi_run_ctx {
> unsigned long entry_ip;
> };
>
> +struct user_syms {
> + const char **syms;
> + char *buf;
> +};
> +
> +static int copy_user_syms(struct user_syms *us, void __user *usyms, u32 cnt)
> +{
> + const char __user **usyms_copy = NULL;
> + const char **syms = NULL;
> + char *buf = NULL, *p;
> + int err = -EFAULT;
> + unsigned int i;
> + size_t size;
> +
> + size = cnt * sizeof(*usyms_copy);
> +
> + usyms_copy = kvmalloc(size, GFP_KERNEL);
> + if (!usyms_copy)
> + return -ENOMEM;
do you really need usyms_copy? why not just read one pointer at a time?
> +
> + if (copy_from_user(usyms_copy, usyms, size))
> + goto error;
> +
> + err = -ENOMEM;
> + syms = kvmalloc(size, GFP_KERNEL);
> + if (!syms)
> + goto error;
> +
> + /* TODO this potentially allocates lot of memory (~6MB in my tests
> + * with attaching ~40k functions). I haven't seen this to fail yet,
> + * but it could be changed to allocate memory gradually if needed.
> + */
> + size = cnt * KSYM_NAME_LEN;
this reassignment of size makes the code hard to follow; you can just
do cnt * KSYM_NAME_LEN inside kvmalloc(), since you never use size
anywhere else
> + buf = kvmalloc(size, GFP_KERNEL);
> + if (!buf)
> + goto error;
> +
> + for (p = buf, i = 0; i < cnt; i++) {
like here: before doing strncpy_from_user() you can read usyms[i] from
user-space into a temporary variable, so no need for the extra kvmalloc?
> + err = strncpy_from_user(p, usyms_copy[i], KSYM_NAME_LEN);
> + if (err == KSYM_NAME_LEN)
> + err = -E2BIG;
> + if (err < 0)
> + goto error;
> + syms[i] = p;
> + p += err + 1;
> + }
> +
> + err = 0;
> + us->syms = syms;
> + us->buf = buf;
> +
[...]