Message-ID: <a365a7ae-fee1-4148-9b5b-9593fde7f6f7@linux.dev>
Date: Tue, 5 Aug 2025 17:27:58 +0800
From: Tao Chen <chen.dylane@...ux.dev>
To: Jiri Olsa <olsajiri@...il.com>
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
martin.lau@...ux.dev, eddyz87@...il.com, song@...nel.org,
yonghong.song@...ux.dev, john.fastabend@...il.com, kpsingh@...nel.org,
sdf@...ichev.me, haoluo@...gle.com, mattbobrowski@...gle.com,
rostedt@...dmis.org, mhiramat@...nel.org, mathieu.desnoyers@...icios.com,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next] bpf: Disable migrate when kprobe_multi attach to
access bpf_prog_active
On 2025/8/5 17:07, Jiri Olsa wrote:
> On Mon, Aug 04, 2025 at 10:15:46PM +0800, Tao Chen wrote:
>> On 2025/8/4 21:02, Jiri Olsa wrote:
>>> On Mon, Aug 04, 2025 at 08:16:15PM +0800, Tao Chen wrote:
>>>> The link_create syscall is not protected by bpf_disable_instrumentation,
>>>> so accessing the percpu data bpf_prog_active should be done with migration
>>>> disabled when a kprobe_multi program is attached.
>>>>
>>>> Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
>>>> Signed-off-by: Tao Chen <chen.dylane@...ux.dev>
>>>> ---
>>>> kernel/trace/bpf_trace.c | 4 ++--
>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>>>> index 3ae52978cae..f6762552e8e 100644
>>>> --- a/kernel/trace/bpf_trace.c
>>>> +++ b/kernel/trace/bpf_trace.c
>>>> @@ -2728,23 +2728,23 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
>>>> struct pt_regs *regs;
>>>> int err;
>>>> + migrate_disable();
>>>> if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
>>>
>>> this is called all the way from the graph tracer, which disables preemption
>>> in function_graph_enter_regs, so I think we can safely use __this_cpu_inc_return
>>>
>>>
>>>> bpf_prog_inc_misses_counter(link->link.prog);
>>>> err = 1;
>>>> goto out;
>>>> }
>>>> - migrate_disable();
>>>
>>> hmm, but now I'm not sure why we disable migration in here then
>>>
>>
>> It seems that there is a cant_migrate() check in bpf_prog_run, so migration
>> should be disabled before the run.
>
> yes, but disabled preemption will take care of that
>
I see, you are right, the disabled preempt count will pass the check, thanks.
void __cant_migrate(const char *file, int line)
{
	static unsigned long prev_jiffy;

	if (irqs_disabled())
		return;

	if (is_migration_disabled(current))
		return;

	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
		return;

	if (preempt_count() > 0)
		return;
	...
> I think we can do change below plus some comment that Yonghong
> is suggesting in the other reply
>
Yes, I will remove the migrate_disable() and add a comment as you and
Yonghong suggested.
> jirka
>
>
> ---
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 3ae52978cae6..74e8d9543c6d 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2734,14 +2734,12 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
> goto out;
> }
>
> - migrate_disable();
> rcu_read_lock();
> regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
> old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
> err = bpf_prog_run(link->link.prog, regs);
> bpf_reset_run_ctx(old_run_ctx);
> rcu_read_unlock();
> - migrate_enable();
>
> out:
> __this_cpu_dec(bpf_prog_active);
--
Best Regards
Tao Chen