Message-ID: <afcb878e-d233-4c87-a0fc-803612c8c91f@rosa.ru>
Date: Fri, 7 Nov 2025 09:58:16 +0300
From: Алексей Сафин <a.safin@...a.ru>
To: Yafang Shao <laoar.shao@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>, Eduard Zingerman
<eddyz87@...il.com>, Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>,
John Fastabend <john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>, Hao Luo <haoluo@...gle.com>,
Jiri Olsa <jolsa@...nel.org>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, lvc-patches@...uxtesting.org,
stable@...r.kernel.org
Subject: Re: [PATCH] bpf: hashtab: fix 32-bit overflow in memory usage
calculation

Yes, that looks even better to me. Changing value_size to u64 at its
declaration makes the arithmetic safe everywhere and keeps the code cleaner.
I agree with this version.
Should I prepare a v2 patch with this modification, or will you take it
from here?
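
To see why hoisting the width to the declaration is enough, here is a minimal
user-space sketch (the sizes and counts are made up purely for illustration,
not taken from a real map; it assumes a typical ABI where int is 32 bits):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical map geometry, for illustration only. */
	uint32_t value_size  = 4104;      /* e.g. round_up(4100, 8) */
	uint32_t num_cpus    = 64;        /* stand-in for num_possible_cpus() */
	uint32_t num_entries = 1 << 20;   /* stand-in for max_entries */

	/* All operands are 32-bit, so the product wraps modulo 2^32. */
	uint64_t wrapped = value_size * num_cpus * num_entries;

	/* Declaring value_size as 64-bit makes the whole chain 64-bit. */
	uint64_t value_size64 = value_size;
	uint64_t correct = value_size64 * num_cpus * num_entries;

	printf("32-bit product: %" PRIu64 "\n", wrapped);
	printf("64-bit product: %" PRIu64 "\n", correct);
	return 0;
}

The v1 patch below fixes the same thing per call site with a (u64) cast; the
u64 declaration simply moves the widening to one place so every use of
value_size is covered.
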
On 07.11.2025 04:58, Yafang Shao wrote:
> On Fri, Nov 7, 2025 at 4:59 AM Alexei Safin <a.safin@...a.ru> wrote:
>> The intermediate product value_size * num_possible_cpus() is evaluated
>> in 32-bit arithmetic and only then promoted to 64 bits. On systems with
>> large value_size and many possible CPUs this can overflow and lead to
>> an underestimated memory usage.
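
For a concrete sense of scale (same made-up figures as in the sketch above:
value_size rounded to 4104 bytes, 64 possible CPUs, about 1M entries), the
per-CPU term should be roughly 275 GB, but the 32-bit product wraps to about
537 MB, so the reported usage can be low by more than two orders of magnitude.
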
>>
>> Cast value_size to u64 before multiplying.
>>
>> Found by Linux Verification Center (linuxtesting.org) with SVACE.
>>
>> Fixes: 304849a27b34 ("bpf: hashtab memory usage")
>> Cc: stable@...r.kernel.org
>> Signed-off-by: Alexei Safin <a.safin@...a.ru>
>> ---
>> kernel/bpf/hashtab.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>> index 570e2f723144..7ad6b5137ba1 100644
>> --- a/kernel/bpf/hashtab.c
>> +++ b/kernel/bpf/hashtab.c
>> @@ -2269,7 +2269,7 @@ static u64 htab_map_mem_usage(const struct bpf_map *map)
>> usage += htab->elem_size * num_entries;
>>
>> if (percpu)
>> - usage += value_size * num_possible_cpus() * num_entries;
>> + usage += (u64)value_size * num_possible_cpus() * num_entries;
>> else if (!lru)
>> usage += sizeof(struct htab_elem *) * num_possible_cpus();
>> } else {
>> @@ -2281,7 +2281,7 @@ static u64 htab_map_mem_usage(const struct bpf_map *map)
>> usage += (htab->elem_size + LLIST_NODE_SZ) * num_entries;
>> if (percpu) {
>> usage += (LLIST_NODE_SZ + sizeof(void *)) * num_entries;
>> - usage += value_size * num_possible_cpus() * num_entries;
>> + usage += (u64)value_size * num_possible_cpus() * num_entries;
>> }
>> }
>> return usage;
>> --
>> 2.50.1 (Apple Git-155)
>>
> Thanks for the fix. What do you think about this change?
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 4a9eeb7aef85..f9084158bfe2 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -2251,7 +2251,7 @@ static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_
> static u64 htab_map_mem_usage(const struct bpf_map *map)
> {
> struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
> - u32 value_size = round_up(htab->map.value_size, 8);
> + u64 value_size = round_up(htab->map.value_size, 8);
> bool prealloc = htab_is_prealloc(htab);
> bool percpu = htab_is_percpu(htab);
> bool lru = htab_is_lru(htab);
>
>