Message-ID: <8d1f2a05-368e-9f50-8e6f-a8a717517766@fb.com>
Date: Thu, 26 May 2022 09:23:35 -0700
From: Yonghong Song <yhs@...com>
To: Jiri Olsa <jolsa@...nel.org>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...omium.org>
Subject: Re: [RFC bpf-next] bpf: Use prog->active instead of bpf_prog_active
for kprobe_multi
On 5/25/22 4:40 AM, Jiri Olsa wrote:
> hi,
> Alexei suggested to use prog->active instead global bpf_prog_active
> for programs attached with kprobe multi [1].
prog->active and bpf_prog_active both try to prevent program
recursion. bpf_prog_active provides the stronger protection,
as it prevents different programs from recursing into each other,
while prog->active only guards against recursion of the same
program. Currently trampoline-based programs use the prog->active
mechanism, while kprobe, tracepoint and perf programs use
bpf_prog_active.
>
> AFAICS this will bypass bpf_disable_instrumentation, which seems to be
> ok for some places like hash map update, but I'm not sure about other
> places, hence this is RFC post.
>
> I'm not sure how are kprobes different to trampolines in this regard,
> because trampolines use prog->active and it's not a problem.
The following is just my understanding.
In most cases, prog->active should be okay. The only tricky
case might be shared maps: one prog does an update/delete of a
map element, and while the bucket lock is held inside
update/delete, another trampoline program is triggered and tries
to update/delete the same map (bucket). But this is a known issue
and not unique to kprobe_multi.
>
> thoughts?
>
> thanks,
> jirka
>
>
> [1] https://lore.kernel.org/bpf/20220316185333.ytyh5irdftjcklk6@ast-mbp.dhcp.thefacebook.com/
> ---
> kernel/trace/bpf_trace.c | 31 +++++++++++++++++++------------
> 1 file changed, 19 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 10b157a6d73e..7aec39ae0a1c 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2385,8 +2385,8 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx)
> }
>
> static int
> -kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
> - unsigned long entry_ip, struct pt_regs *regs)
> +__kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
> + unsigned long entry_ip, struct pt_regs *regs)
> {
> struct bpf_kprobe_multi_run_ctx run_ctx = {
> .link = link,
> @@ -2395,21 +2395,28 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
> struct bpf_run_ctx *old_run_ctx;
> int err;
>
> - if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
> - err = 0;
> - goto out;
> - }
> -
> - migrate_disable();
> - rcu_read_lock();
> old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
> err = bpf_prog_run(link->link.prog, regs);
> bpf_reset_run_ctx(old_run_ctx);
> + return err;
> +}
> +
> +static int
> +kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
> + unsigned long entry_ip, struct pt_regs *regs)
> +{
> + struct bpf_prog *prog = link->link.prog;
> + int err = 0;
> +
> + migrate_disable();
> + rcu_read_lock();
> +
> + if (likely(__this_cpu_inc_return(*(prog->active)) == 1))
> + err = __kprobe_multi_link_prog_run(link, entry_ip, regs);
> +
> + __this_cpu_dec(*(prog->active));
> rcu_read_unlock();
> migrate_enable();
> -
> - out:
> - __this_cpu_dec(bpf_prog_active);
> return err;
> }
>