Message-ID: <e3e37e53-59ce-4fd8-8e4c-a3c05acda497@linux.dev>
Date: Thu, 29 Jan 2026 00:49:43 +0800
From: Tao Chen <chen.dylane@...ux.dev>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
 mark.rutland@....com, alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
 irogers@...gle.com, adrian.hunter@...el.com, kan.liang@...ux.intel.com,
 song@...nel.org, ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
 martin.lau@...ux.dev, eddyz87@...il.com, yonghong.song@...ux.dev,
 john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
 haoluo@...gle.com, linux-perf-users@...r.kernel.org,
 linux-kernel@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v8 2/3] perf: Refactor get_perf_callchain

On 2026/1/28 17:10, Peter Zijlstra wrote:
> On Mon, Jan 26, 2026 at 03:43:30PM +0800, Tao Chen wrote:
>>  From the BPF stack map, we want to ensure that the callchain buffer
>> will not be overwritten by other preempting tasks, and we also aim
>> to reduce the preempt-disabled interval. Based on the suggestions from
>> Peter and Andrii, export a new API, __get_perf_callchain; the usage
>> from the BPF side is as follows:
>>
>> preempt_disable()
>> entry = get_callchain_entry()
>> preempt_enable()
>> __get_perf_callchain(entry)
>> put_callchain_entry(entry)
> 
> That makes no sense, this means any other task on that CPU is getting
> screwed over.
> 
> Why are you worried about the preempt_disable() here? If this were an
> interrupt context we'd still do that unwind -- but then with IRQs
> disabled.

Hi Peter,

Right now, obtaining stack information in BPF involves two steps:
1. get the callchain
2. store the callchain in a BPF map or copy it to a buffer
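
Roughly, in simplified pseudo-C (not the exact stackmap.c code;
stack_map_store() is a made-up stand-in for the hash-and-store logic):

	/* step 1: unwind into the shared per-context callchain buffer */
	trace = get_perf_callchain(regs, kernel, user, max_depth, ...);
	if (!trace)
		return -EFAULT;

	/* step 2: hash the captured ips and store them in the map */
	id = stack_map_store(map, trace->ip, trace->nr);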

There is no preempt disable in BPF now. While obtaining the stack
information of task A, task A may be preempted by task B, which then
acquires its own stack information through the same logic. When
execution resumes in task A, the callchain buffer holds the stack
information of task B, because each context (task, soft irq, irq, nmi)
has only one callchain entry per CPU:

       taskA                             taskB
1. callchain(A) = get_perf_callchain
                  <-- preempted by B    callchain(B) = get_perf_callchain
2. stack_map(callchain(B))
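
The underlying reason (simplified from kernel/events/callchain.c, not
the literal code) is that get_perf_callchain() releases the per-CPU
entry before returning a pointer into it:

	entry = get_callchain_entry(&rctx); /* per-CPU, per-context slot */
	if (!entry)
		return NULL;
	/* ... unwind kernel and user stacks into entry ... */
	put_callchain_entry(rctx); /* slot is free again here ... */
	return entry;              /* ... but the caller still reads it */

So once task B runs the same path on that CPU, it legitimately grabs
the slot again and overwrites task A's data.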

We therefore want to ensure that while task A's entry is in use, a
preempting task B cannot take it. The approach is to defer
put_callchain_entry() until the stack has been captured and saved in
the stack map:

       taskA                             taskB
1. callchain(A) = __get_perf_callchain
                  <-- preempted by B    callchain(B) = __get_perf_callchain (fails)
2. stack_map(callchain(A))
3. put_callchain_entry()
Task B cannot get a callchain entry because task A still holds it.
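
In code, the BPF side would then look roughly like this (a sketch of
the intended pattern, not the final patch; the __get_perf_callchain()
arguments are abbreviated):

	int rctx;
	struct perf_callchain_entry *entry;

	preempt_disable();
	entry = get_callchain_entry(&rctx); /* marks the slot busy */
	preempt_enable();
	if (!entry)
		return -EFAULT; /* e.g. task B while task A holds it */

	__get_perf_callchain(entry, regs, kernel, user, ...);
	/* ... store entry->ip[0..nr) into the stack map ... */
	put_callchain_entry(rctx); /* only now is the slot released */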

And the preempt_disable() around get_callchain_entry() was suggested
by Yonghong in v4:
https://lore.kernel.org/bpf/c352f357-1417-47b5-9d8c-28d99f20f5a6@linux.dev/

Please correct me if I'm mistaken. Thanks.

-- 
Best Regards
Tao Chen
