Message-Id: <0062e81b59efb3839bd4f9a3459cc54730dbbfb9dd2f688df72d3b4669cab90e@mail.kernel.org>
Date: Fri, 6 Feb 2026 09:34:21 +0000 (UTC)
From: bot+bpf-ci@...nel.org
To: chen.dylane@...ux.dev,song@...nel.org,jolsa@...nel.org,ast@...nel.org,daniel@...earbox.net,andrii@...nel.org,martin.lau@...ux.dev,eddyz87@...il.com,yonghong.song@...ux.dev,john.fastabend@...il.com,kpsingh@...nel.org,sdf@...ichev.me,haoluo@...gle.com
Cc: bpf@...r.kernel.org,linux-kernel@...r.kernel.org,chen.dylane@...ux.dev,ast@...nel.org,andrii@...nel.org,daniel@...earbox.net,martin.lau@...nel.org,eddyz87@...il.com,yonghong.song@...ux.dev,clm@...a.com,ihor.solodrai@...ux.dev
Subject: Re: [PATCH bpf-next v2 2/2] bpf: Add preempt disable for bpf_get_stackid
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 1b100a03ef2e..d263f851f084 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
[ ... ]
> @@ -280,12 +297,22 @@ static long __bpf_get_stackid(struct bpf_map *map,
> /* for build_id+offset, pop a bucket before slow cmp */
> new_bucket = (struct stack_map_bucket *)
> pcpu_freelist_pop(&smap->freelist);
> - if (unlikely(!new_bucket))
> - return -ENOMEM;
> + if (unlikely(!new_bucket)) {
> + ret = -ENOMEM;
> + goto go_out;
> + }
> new_bucket->nr = trace_nr;
> id_offs = (struct bpf_stack_build_id *)new_bucket->data;
> for (i = 0; i < trace_nr; i++)
> id_offs[i].ip = ips[i];
> +
> + /*
> + * It is safe after perf callchain ips copied to bucket buffer
> + * to reduce the length of preempt section, we can enable preempt here.
> + */
> + if (!trace_in)
> + preempt_enable();
> +
When the stack_map_use_build_id() path falls through to insert a new entry
(neither the hash_matches early return nor the -EEXIST early return is
taken), this early preempt_enable() runs, and the code then continues to
go_out, where preempt_enable() is called again.
The sequence when trace_in == NULL:

	preempt_disable();		// entering __bpf_get_stackid
	...
	if (!trace_in)
		preempt_enable();	// here, after copying IPs
	...
	// fall through to insert new entry
	new_bucket->hash = hash;
	...
	xchg(&smap->buckets[id], new_bucket);
	...
go_out:
	if (!trace_in)
		preempt_enable();	// second enable - imbalanced
The two direct returns (return id and return -EEXIST) after the early
preempt_enable() correctly bypass go_out. However, when inserting a new
stack trace (the normal case when no exact match exists and the bucket
can be reused), the code falls through to go_out and calls preempt_enable()
a second time.
Could this cause preempt_count underflow when using BPF_F_STACK_BUILD_ID
maps with bpf_get_stackid()?
> @@ -317,31 +349,21 @@ static long __bpf_get_stackid(struct bpf_map *map,
> old_bucket = xchg(&smap->buckets[id], new_bucket);
> if (old_bucket)
> pcpu_freelist_push(&smap->freelist, &old_bucket->fnode);
> - return id;
> +
> +go_out:
> + if (!trace_in)
> + preempt_enable();
> + return ret;
> }
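
A simpler alternative, if the shorter preempt section is not essential,
would be to drop the early preempt_enable() in the build_id branch and let
every exit go through go_out, so each call pairs exactly one disable with
one enable. Again only a sketch of the idea, not a tested change:

	if (!trace_in)
		preempt_disable();
	...
	if (stack_map_use_build_id(map)) {
		new_bucket = (struct stack_map_bucket *)
			pcpu_freelist_pop(&smap->freelist);
		if (unlikely(!new_bucket)) {
			ret = -ENOMEM;
			goto go_out;
		}
		/* copy IPs, resolve build IDs, compare, insert ... */
		/* no early preempt_enable() here; the hash-match and
		 * -EEXIST exits would then also need to goto go_out
		 */
	}
	...
	ret = id;

go_out:
	if (!trace_in)
		preempt_enable();	/* exactly one enable per disable */
	return ret;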
---
AI reviewed your patch. Please fix the bug or reply by email explaining why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21745231308
AI-authorship-score: low
AI-authorship-explanation: The commit shows typical human patterns including a control-flow bug from manual refactoring that AI would likely catch, combined with natural but slightly awkward phrasing in comments.
issues-found: 1
issue-severity-score: high
issue-severity-explanation: Preempt count underflow in the build_id path causes kernel warnings and potential system instability when inserting new stack traces.