Message-ID: <0165bf55-4a46-4e75-91df-644b0281b247@arnaud-lcm.com>
Date: Mon, 25 Aug 2025 21:07:10 +0100
From: "Lecomte, Arnaud" <contact@...aud-lcm.com>
To: Yonghong Song <yonghong.song@...ux.dev>,
Martin KaFai Lau <martin.lau@...ux.dev>
Cc: andrii@...nel.org, ast@...nel.org, bpf@...r.kernel.org,
daniel@...earbox.net, eddyz87@...il.com, haoluo@...gle.com,
john.fastabend@...il.com, jolsa@...nel.org, kpsingh@...nel.org,
linux-kernel@...r.kernel.org, sdf@...ichev.me,
syzbot+c9b724fbb41cf2538b7b@...kaller.appspotmail.com,
syzkaller-bugs@...glegroups.com, song@...nel.org
Subject: Re: [PATCH bpf-next RESEND v4 1/2] bpf: refactor max_depth
computation in bpf_get_stack()
On 25/08/2025 19:27, Yonghong Song wrote:
>
>
> On 8/25/25 9:39 AM, Lecomte, Arnaud wrote:
>>
>> On 19/08/2025 22:15, Martin KaFai Lau wrote:
>>> On 8/19/25 9:26 AM, Arnaud Lecomte wrote:
>>>> Add a new helper function, stack_map_calculate_max_depth(), that
>>>> computes the max depth for a stackmap.
>>>>
>>>> Changes in v2:
>>>> - Removed the 'map_size % map_elem_size' check from
>>>>   stack_map_calculate_max_depth
>>>> - Renamed the stack_map_calculate_max_depth params to be more
>>>>   generic
>>>>
>>>> Changes in v3:
>>>> - Renamed the map size param to 'size' in the max depth helper
>>>>
>>>> Changes in v4:
>>>> - Fixed the args indentation in the max depth helper
>>>>
>>>> Link to v3:
>>>> https://lore.kernel.org/all/09dc40eb-a84e-472a-8a68-36a2b1835308@linux.dev/
>>>>
>>>> Signed-off-by: Arnaud Lecomte <contact@...aud-lcm.com>
>>>> Acked-by: Yonghong Song <yonghong.song@...ux.dev>
>>>> ---
>>>> kernel/bpf/stackmap.c | 30 ++++++++++++++++++++++++------
>>>> 1 file changed, 24 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
>>>> index 3615c06b7dfa..b9cc6c72a2a5 100644
>>>> --- a/kernel/bpf/stackmap.c
>>>> +++ b/kernel/bpf/stackmap.c
>>>> @@ -42,6 +42,27 @@ static inline int stack_map_data_size(struct bpf_map *map)
>>>>  		sizeof(struct bpf_stack_build_id) : sizeof(u64);
>>>>  }
>>>>  
>>>> +/**
>>>> + * stack_map_calculate_max_depth - Calculate maximum allowed stack trace depth
>>>> + * @size: Size of the buffer/map value in bytes
>>>> + * @elem_size: Size of each stack trace element
>>>> + * @flags: BPF stack trace flags (BPF_F_USER_STACK, BPF_F_USER_BUILD_ID, ...)
>>>> + *
>>>> + * Return: Maximum number of stack trace entries that can be safely stored
>>>> + */
>>>> +static u32 stack_map_calculate_max_depth(u32 size, u32 elem_size, u64 flags)
>>>> +{
>>>> +	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
>>>> +	u32 max_depth;
>>>> +
>>>> +	max_depth = size / elem_size;
>>>> +	max_depth += skip;
>>>> +	if (max_depth > sysctl_perf_event_max_stack)
>>>> +		return sysctl_perf_event_max_stack;
>>>
>>> hmm... this looks a bit suspicious. Is it possible that
>>> sysctl_perf_event_max_stack is being changed to a larger value in
>>> parallel?
>>>
>> Hi Martin, this is a valid concern, as sysctl_perf_event_max_stack
>> can be modified at runtime through
>> /proc/sys/kernel/perf_event_max_stack. What we could do instead is
>> take a local snapshot of it with READ_ONCE(), e.g. (untested sketch):
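>>
>> static u32 stack_map_calculate_max_depth(u32 size, u32 elem_size, u64 flags)
>> {
>> 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
>> 	/* Snapshot the sysctl once so a concurrent update cannot give
>> 	 * us a larger bound than the one we just compared against.
>> 	 */
>> 	u32 current_max = READ_ONCE(sysctl_perf_event_max_stack);
>> 	u32 max_depth;
>>
>> 	max_depth = size / elem_size;
>> 	max_depth += skip;
>> 	if (max_depth > current_max)
>> 		return current_max;
>>
>> 	return max_depth;
>> }
>>
>> Any thoughts on this?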
>
> There is no need to have READ_ONCE. Just do:
>
> 	int curr_sysctl_max_stack = sysctl_perf_event_max_stack;
>
> 	if (max_depth > curr_sysctl_max_stack)
> 		return curr_sysctl_max_stack;
>
> Because of the above change, the patch is not a refactoring change any
> more.
>
Why would you no longer consider it a refactoring change?
>>
>>>> +
>>>> +	return max_depth;
>>>> +}
>>>> +
>>>>  static int prealloc_elems_and_freelist(struct bpf_stack_map *smap)
>>>>  {
>>>>  	u64 elem_size = sizeof(struct stack_map_bucket) +
>>>> @@ -406,7 +427,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>>>>  			    struct perf_callchain_entry *trace_in,
>>>>  			    void *buf, u32 size, u64 flags, bool may_fault)
>>>>  {
>>>> -	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
>>>> +	u32 trace_nr, copy_len, elem_size, max_depth;
>>>>  	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
>>>>  	bool crosstask = task && task != current;
>>>>  	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
>>>> @@ -438,10 +459,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>>>>  		goto clear;
>>>>  	}
>>>>  
>>>> -	num_elem = size / elem_size;
>>>> -	max_depth = num_elem + skip;
>>>> -	if (sysctl_perf_event_max_stack < max_depth)
>>>> -		max_depth = sysctl_perf_event_max_stack;
>>>> +	max_depth = stack_map_calculate_max_depth(size, elem_size, flags);
>>>>  
>>>>  	if (may_fault)
>>>>  		rcu_read_lock(); /* need RCU for perf's callchain below */
>>>> @@ -461,7 +479,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>>>>  	}
>>>>  
>>>>  	trace_nr = trace->nr - skip;
>>>> -	trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;
>>>
>>> I suspect it was fine because trace_nr was still bounded by num_elem.
>>>
>> We should bring back the num_elem bound as an additional safety net.
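>> Something like this (untested), with num_elem re-added as a local:
>>
>> 	u32 num_elem = size / elem_size;
>>
>> 	trace_nr = trace->nr - skip;
>> 	/* bound by both the buffer capacity and the computed max depth */
>> 	trace_nr = min3(trace_nr, num_elem, max_depth - skip);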
>>>> +	trace_nr = min(trace_nr, max_depth - skip);
>>>
>>> but now the min() is also based on max_depth, which could be
>>> sysctl_perf_event_max_stack.
>>>
>>> Besides, if I read it correctly, in "max_depth - skip", max_depth
>>> could also be less than skip. I assume trace->nr is bounded by
>>> max_depth, so it should be less of a problem, but it is still a bit
>>> unintuitive to read.
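>>>
>>> Something like this (untested) would read clearer to me:
>>>
>>> 	/* max_depth includes the skipped entries; guard the subtraction */
>>> 	u32 max_nr = max_depth > skip ? max_depth - skip : 0;
>>>
>>> 	trace_nr = min(trace_nr, max_nr);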
>>>
>>>>  	copy_len = trace_nr * elem_size;
>>>>  
>>>>  	ips = trace->ip + skip;
>>>
>
>