Message-ID: <c352f357-1417-47b5-9d8c-28d99f20f5a6@linux.dev>
Date: Wed, 5 Nov 2025 14:16:58 -0800
From: Yonghong Song <yonghong.song@...ux.dev>
To: Tao Chen <chen.dylane@...ux.dev>, peterz@...radead.org, mingo@...hat.com,
 acme@...nel.org, namhyung@...nel.org, mark.rutland@....com,
 alexander.shishkin@...ux.intel.com, jolsa@...nel.org, irogers@...gle.com,
 adrian.hunter@...el.com, kan.liang@...ux.intel.com, song@...nel.org,
 ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
 martin.lau@...ux.dev, eddyz87@...il.com, john.fastabend@...il.com,
 kpsingh@...nel.org, sdf@...ichev.me, haoluo@...gle.com
Cc: linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
 bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v4 2/2] bpf: Hold the perf callchain entry until
 used completely



On 10/28/25 9:25 AM, Tao Chen wrote:
> As Alexei noted, get_perf_callchain() return values may be reused
> if a task is preempted after the BPF program enters migrate-disable
> mode. The perf_callchain_entries pool holds only a small stack of
> entries, so we hold the entry across its use as follows:
>
> 1. get the perf callchain entry
> 2. BPF use...
> 3. put the perf callchain entry
>
> Signed-off-by: Tao Chen <chen.dylane@...ux.dev>
> ---
>   kernel/bpf/stackmap.c | 61 ++++++++++++++++++++++++++++++++++---------
>   1 file changed, 48 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index e28b35c7e0b..70d38249083 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -188,13 +188,12 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
>   }
>   
>   static struct perf_callchain_entry *
> -get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
> +get_callchain_entry_for_task(int *rctx, struct task_struct *task, u32 max_depth)
>   {
>   #ifdef CONFIG_STACKTRACE
>   	struct perf_callchain_entry *entry;
> -	int rctx;
>   
> -	entry = get_callchain_entry(&rctx);
> +	entry = get_callchain_entry(rctx);
>   
>   	if (!entry)
>   		return NULL;
> @@ -216,8 +215,6 @@ get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
>   			to[i] = (u64)(from[i]);
>   	}
>   
> -	put_callchain_entry(rctx);
> -
>   	return entry;
>   #else /* CONFIG_STACKTRACE */
>   	return NULL;
> @@ -297,6 +294,31 @@ static long __bpf_get_stackid(struct bpf_map *map,
>   	return id;
>   }
>   
> +static struct perf_callchain_entry *
> +bpf_get_perf_callchain(int *rctx, struct pt_regs *regs, bool kernel, bool user,
> +		       int max_stack, bool crosstask)
> +{
> +	struct perf_callchain_entry_ctx ctx;
> +	struct perf_callchain_entry *entry;
> +
> +	entry = get_callchain_entry(rctx);

I think this may not work. Let us say we have two bpf programs,
both pinned to a particular cpu (migrate disabled but preempt enabled).
get_callchain_entry() calls get_recursion_context() to get the
buffer for a particular level:

static inline int get_recursion_context(u8 *recursion)
{
	unsigned char rctx = interrupt_context_level();

	if (recursion[rctx])
		return -1;

	/* <-- both tasks can sit here, having passed the check
	 *     above but not yet done the increment below
	 */
	recursion[rctx]++;
	barrier();

	return rctx;
}

It is possible that both tasks (at process level) reach the point
right before "recursion[rctx]++;". In that case, both tasks get the
same buffer, and this is not right.
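
Concretely, since interrupt_context_level() returns 0 for both
process-level tasks, the bad interleaving on one cpu could look
like this:

  task A                            task B
  ------                            ------
  rctx = 0
  recursion[0] == 0, passes check
                                    (preempts A)
                                    rctx = 0
                                    recursion[0] == 0, passes check
                                    recursion[0]++      /* now 1 */
                                    uses the entry
  recursion[0]++                    /* now 2, same entry as B */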

To fix this, we either need to disable preemption on the bpf side,
or use some kind of atomic operation (cmpxchg or similar), or maybe
disable preemption between the if statement and recursion[rctx]++,
so that only one task can get the buffer?
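
For example, an untested sketch of the cmpxchg idea (assuming a
1-byte cmpxchg() is usable on all architectures we support) could
close the window atomically:

static inline int get_recursion_context(u8 *recursion)
{
	unsigned char rctx = interrupt_context_level();

	/* atomically claim the slot with a 0 -> 1 transition,
	 * so only one task at this context level can win
	 */
	if (cmpxchg(&recursion[rctx], 0, 1) != 0)
		return -1;

	return rctx;
}

put_recursion_context() would still just decrement the slot back to
zero. Alternatively, on the bpf side, a plain preempt_disable()
around the whole get/use/put sequence in the helper would also
serialize the two tasks.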


> +	if (unlikely(!entry))
> +		return NULL;
> +
> +	__init_perf_callchain_ctx(&ctx, entry, max_stack, false);
> +	if (kernel)
> +		__get_perf_callchain_kernel(&ctx, regs);
> +	if (user && !crosstask)
> +		__get_perf_callchain_user(&ctx, regs);
> +
> +	return entry;
> +}
> +
> +static void bpf_put_callchain_entry(int rctx)

Since we have bpf_get_perf_callchain(), maybe rename the above
to bpf_put_perf_callchain() for symmetry?

> +{
> +	put_callchain_entry(rctx);
> +}
> +

[...]

