Message-ID: <6a931aca-1aad-41e4-8449-89f48121abba@linux.dev>
Date: Tue, 5 Aug 2025 20:28:48 +0800
From: Tao Chen <chen.dylane@...ux.dev>
To: Yonghong Song <yonghong.song@...ux.dev>, Jiri Olsa <olsajiri@...il.com>
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
 martin.lau@...ux.dev, eddyz87@...il.com, song@...nel.org,
 john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
 haoluo@...gle.com, mattbobrowski@...gle.com, rostedt@...dmis.org,
 mhiramat@...nel.org, mathieu.desnoyers@...icios.com, bpf@...r.kernel.org,
 linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next] bpf: Disable migrate when kprobe_multi attach to
 access bpf_prog_active

On 2025/8/5 12:05, Yonghong Song wrote:
> 
> 
> On 8/4/25 6:02 AM, Jiri Olsa wrote:
>> On Mon, Aug 04, 2025 at 08:16:15PM +0800, Tao Chen wrote:
>>> The link_create syscall is not protected by bpf_disable_instrumentation,
>>> so the percpu data bpf_prog_active should be accessed with migration
>>> disabled when a kprobe_multi program attaches.
>>>
>>> Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
>>> Signed-off-by: Tao Chen <chen.dylane@...ux.dev>
>>> ---
>>>   kernel/trace/bpf_trace.c | 4 ++--
>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>>> index 3ae52978cae..f6762552e8e 100644
>>> --- a/kernel/trace/bpf_trace.c
>>> +++ b/kernel/trace/bpf_trace.c
>>> @@ -2728,23 +2728,23 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
>>>       struct pt_regs *regs;
>>>       int err;
>>> +    migrate_disable();
>>>       if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
>> this is called all the way from the graph tracer, which disables preemption
>> in function_graph_enter_regs, so I think we can safely use
>> __this_cpu_inc_return
> 
> Agree. migrate_disable() is not needed here. But it would be great to add a
> comment here, since most other prog_run paths typically have
> migrate_disable/enable.
> 
>>
>>
>>>           bpf_prog_inc_misses_counter(link->link.prog);
>>>           err = 1;
>>>           goto out;
>>>       }
>>> -    migrate_disable();
>> hmm, but now I'm not sure why we disable migration here then
> 
> Probably an oversight.
> 
>>
>> jirka
>>
>>>       rcu_read_lock();
>>>       regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
>>>       old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
>>>       err = bpf_prog_run(link->link.prog, regs);
>>>       bpf_reset_run_ctx(old_run_ctx);
>>>       rcu_read_unlock();
>>> -    migrate_enable();
>>>    out:
>>>       __this_cpu_dec(bpf_prog_active);
>>> +    migrate_enable();
>>>       return err;
>>>   }
>>> -- 
>>> 2.48.1
>>>
> 

Hi Jiri, Yonghong,

I sent another patch as you suggested, please review it, thanks.

https://lore.kernel.org/bpf/20250805122312.1890951-1-chen.dylane@linux.dev
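
For reference, below is a rough sketch of the direction discussed above
(drop the migrate_disable()/migrate_enable() pair and add a comment noting
why the plain per-CPU access is safe). The patch at the link above is the
authoritative version; the function signature and surrounding code are
abbreviated here.

	/*
	 * Sketch only, not the actual submission. This runs from the
	 * fprobe/ftrace graph entry path (function_graph_enter_regs()),
	 * which already has preemption disabled, so bpf_prog_active can
	 * be used directly without migrate_disable()/migrate_enable().
	 */
	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		bpf_prog_inc_misses_counter(link->link.prog);
		err = 1;
		goto out;
	}

	rcu_read_lock();
	regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
	old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
	err = bpf_prog_run(link->link.prog, regs);
	bpf_reset_run_ctx(old_run_ctx);
	rcu_read_unlock();
 out:
	__this_cpu_dec(bpf_prog_active);
	return err;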

-- 
Best Regards
Tao Chen
