Message-Id: <20220223145840.64f708ed2357c89039f55f07@kernel.org>
Date: Wed, 23 Feb 2022 14:58:40 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Jiri Olsa <jolsa@...nel.org>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, netdev@...r.kernel.org,
bpf@...r.kernel.org, lkml <linux-kernel@...r.kernel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH 02/10] bpf: Add multi kprobe link
Hi Jiri,
On Tue, 22 Feb 2022 18:05:52 +0100
Jiri Olsa <jolsa@...nel.org> wrote:
[snip]
> +
> +static void
> +kprobe_multi_link_handler(struct fprobe *fp, unsigned long entry_ip,
> + struct pt_regs *regs)
> +{
> + unsigned long saved_ip = instruction_pointer(regs);
> + struct bpf_kprobe_multi_link *link;
> +
> + /*
> + * Because fprobe's regs->ip is set to the next instruction of
> + * dynamic-ftrace instruction, correct entry ip must be set, so
> + * that the bpf program can access entry address via regs as same
> + * as kprobes.
> + */
> + instruction_pointer_set(regs, entry_ip);
This is true for the entry_handler, but false for the exit_handler,
because entry_ip points to the probed function's address, not the
return address. Thus, when this is done in the exit_handler, the
bpf prog appears to be called from the entry of the function, not
from its return.
If that is what you expect, please explicitly comment on it to
avoid confusion. Or, make another handler function for exit
probing.
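A separate exit handler could look roughly like the sketch below. This is only an illustration of the suggestion, reusing the names from the quoted hunk; the exit_handler callback signature follows the fprobe API discussed in this series, and the exact semantics (entry address vs. return address visible to the bpf program) is the open design question:

```c
static void
kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long entry_ip,
			       struct pt_regs *regs)
{
	struct bpf_kprobe_multi_link *link;

	/*
	 * Unlike the entry handler, do not rewrite regs->ip to entry_ip:
	 * at function exit regs->ip holds the return address, so the bpf
	 * program can distinguish exit from entry.
	 */
	link = container_of(fp, struct bpf_kprobe_multi_link, fp);
	kprobe_multi_link_prog_run(link, regs);
}
```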
> +
> + link = container_of(fp, struct bpf_kprobe_multi_link, fp);
> + kprobe_multi_link_prog_run(link, regs);
> +
> + instruction_pointer_set(regs, saved_ip);
> +}
> +
> +static int
> +kprobe_multi_resolve_syms(const void *usyms, u32 cnt,
> + unsigned long *addrs)
> +{
> + unsigned long addr, size;
> + const char **syms;
> + int err = -ENOMEM;
> + unsigned int i;
> + char *func;
> +
> + size = cnt * sizeof(*syms);
> + syms = kvzalloc(size, GFP_KERNEL);
> + if (!syms)
> + return -ENOMEM;
> +
> + func = kmalloc(KSYM_NAME_LEN, GFP_KERNEL);
> + if (!func)
> + goto error;
> +
> + if (copy_from_user(syms, usyms, size)) {
> + err = -EFAULT;
> + goto error;
> + }
> +
> + for (i = 0; i < cnt; i++) {
> + err = strncpy_from_user(func, syms[i], KSYM_NAME_LEN);
> + if (err == KSYM_NAME_LEN)
> + err = -E2BIG;
> + if (err < 0)
> + goto error;
> +
> + err = -EINVAL;
> + if (func[0] == '\0')
> + goto error;
> + addr = kallsyms_lookup_name(func);
> + if (!addr)
> + goto error;
> + if (!kallsyms_lookup_size_offset(addr, &size, NULL))
> + size = MCOUNT_INSN_SIZE;
Note that this is fine for x86, but may not be for other
architectures that use some preparation instructions before the
mcount call.
Maybe you can just reject the symbol if kallsyms_lookup_size_offset() fails.
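For example, the quoted hunk could be changed along these lines (a sketch of the suggestion, not a tested patch):

```c
	addr = kallsyms_lookup_name(func);
	if (!addr)
		goto error;
	/*
	 * Instead of falling back to MCOUNT_INSN_SIZE (which assumes the
	 * mcount call is the first instruction of the function, true on
	 * x86 but not on all architectures), reject symbols whose size
	 * cannot be determined.  err is still -EINVAL here.
	 */
	if (!kallsyms_lookup_size_offset(addr, &size, NULL))
		goto error;
```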
Thank you,
--
Masami Hiramatsu <mhiramat@...nel.org>