Message-ID: <20260128091033.GG3372621@noisy.programming.kicks-ass.net>
Date: Wed, 28 Jan 2026 10:10:33 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Tao Chen <chen.dylane@...ux.dev>
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
mark.rutland@....com, alexander.shishkin@...ux.intel.com,
jolsa@...nel.org, irogers@...gle.com, adrian.hunter@...el.com,
kan.liang@...ux.intel.com, song@...nel.org, ast@...nel.org,
daniel@...earbox.net, andrii@...nel.org, martin.lau@...ux.dev,
eddyz87@...il.com, yonghong.song@...ux.dev,
john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
haoluo@...gle.com, linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v8 2/3] perf: Refactor get_perf_callchain

On Mon, Jan 26, 2026 at 03:43:30PM +0800, Tao Chen wrote:
> In the BPF stack map we want to ensure that the callchain buffer
> will not be overwritten by other preempting tasks, while also keeping
> the preempt-disable interval short. Based on the suggestions from Peter
> and Andrii, export a new API, __get_perf_callchain(); the intended
> usage from the BPF side is as follows:
>
> preempt_disable()
> entry = get_callchain_entry()
> preempt_enable()
> __get_perf_callchain(entry)
> put_callchain_entry(entry)
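
Roughly, the pattern quoted above amounts to the following. This is an
illustrative sketch only: the wrapper name and the argument list of
__get_perf_callchain() are assumptions modelled on the existing
get_perf_callchain(), not taken from the patch.

/*
 * get_callchain_entry()/put_callchain_entry() are declared in
 * kernel/events/internal.h.
 */
static int bpf_stackmap_unwind(struct pt_regs *regs, u32 max_stack,
			       u64 *ips, u32 *nr)
{
	struct perf_callchain_entry *entry;
	int rctx;

	/* Reserve the per-CPU buffer with preemption disabled... */
	preempt_disable();
	entry = get_callchain_entry(&rctx);
	preempt_enable();
	if (!entry)
		return -EBUSY;

	/* ...but run the unwind itself preemptible. */
	__get_perf_callchain(entry, regs, true /* kernel */, true /* user */,
			     max_stack, false /* crosstask */, false /* add_mark */);

	/* Consume the result before releasing the recursion slot. */
	*nr = min_t(u32, entry->nr, max_stack);
	memcpy(ips, entry->ip, *nr * sizeof(u64));

	put_callchain_entry(rctx);
	return 0;
}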

That makes no sense; it means any other task on that CPU is getting
screwed over.

Why are you worried about the preempt_disable() here? If this were an
interrupt context we'd still do that unwind -- but then with IRQs
disabled.
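
For context on the "screwed over" part: each CPU has a small, fixed set
of callchain buffers, one per recursion context (task, softirq, hardirq,
NMI), and get_callchain_entry() keeps that slot claimed until
put_callchain_entry(). A simplified sketch of the scenario, with
hypothetical task_a()/task_b() helpers purely for illustration:

/* Task A follows the pattern proposed above. */
static void task_a(void)
{
	struct perf_callchain_entry *entry;
	int rctx;

	preempt_disable();
	entry = get_callchain_entry(&rctx);	/* claims the task-context slot */
	preempt_enable();
	if (!entry)
		return;

	/*
	 * If A is scheduled out here, the slot stays claimed for as long
	 * as A remains off the CPU.
	 */

	/* ... unwind and consume the entry ... */

	put_callchain_entry(rctx);
}

/* Task B runs on the same CPU, in task context, while A is preempted. */
static void task_b(void)
{
	struct perf_callchain_entry *entry;
	int rctx;

	entry = get_callchain_entry(&rctx);
	if (!entry)
		return;	/* slot still owned by A: B loses its stack trace */

	/* ... */
	put_callchain_entry(rctx);
}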