Message-ID: <da0959e7-a91c-ab4c-56be-3c3cd280e592@iogearbox.net>
Date: Tue, 1 Nov 2022 14:52:16 +0100
From: Daniel Borkmann <daniel@...earbox.net>
To: Kees Cook <keescook@...omium.org>,
Alexei Starovoitov <ast@...nel.org>
Cc: John Fastabend <john.fastabend@...il.com>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-hardening@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 2/3] bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage

On 10/29/22 4:54 AM, Kees Cook wrote:
> Round up allocations with kmalloc_size_roundup() so that the verifier's
> use of ksize() is always accurate and no special handling of the memory
> is needed by KASAN, UBSAN_BOUNDS, or FORTIFY_SOURCE. Pass the new size
> information back up to callers so they can use the space immediately,
> allowing array resizing to happen less frequently as well.
>
[...]
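For readers without the series context, a minimal sketch of the pattern
the commit message describes (the grow_buf() helper below is illustrative,
not from the patch): round the request up to the size the slab allocator
would use anyway, so that ksize() on the result matches what was requested
and the caller can treat the whole rounded region as usable:

  #include <linux/slab.h>

  /* Illustrative only: grow a buffer and report its real capacity. */
  static void *grow_buf(void *buf, size_t want, size_t *cap)
  {
          size_t alloc_size = kmalloc_size_roundup(want);
          void *p = krealloc(buf, alloc_size, GFP_KERNEL);

          if (!p)
                  return NULL;
          *cap = alloc_size;      /* caller may use all of this space */
          return p;
  }
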
The commit message is a bit cryptic here without further context. Is this
a bug fix or an improvement? I read it as the latter, but it would be good
to have more context here for reviewers (maybe a Link tag pointing to some
discussion or the like). Also, why is kmalloc_size_roundup() not hidden
from kmalloc callers? Isn't this a tree-wide issue?
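Purely as a hypothetical sketch of what "hiding" it could look like (the
kmalloc_cap() helper below does not exist in the kernel; it only
illustrates the question): the allocator would apply the roundup
internally and report the usable capacity, leaving call sites unchanged:

  /* Hypothetical, for illustration only. */
  static inline void *kmalloc_cap(size_t size, gfp_t flags, size_t *cap)
  {
          size_t alloc_size = kmalloc_size_roundup(size);
          void *p = kmalloc(alloc_size, flags);

          *cap = p ? alloc_size : 0;
          return p;
  }
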
Thanks,
Daniel
> kernel/bpf/verifier.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index eb8c34db74c7..1c040d27b8f6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1008,9 +1008,9 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
> if (unlikely(check_mul_overflow(n, size, &bytes)))
> return NULL;
>
> - if (ksize(dst) < bytes) {
> + if (ksize(dst) < ksize(src)) {
> kfree(dst);
> - dst = kmalloc_track_caller(bytes, flags);
> + dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
> if (!dst)
> return NULL;
> }
> @@ -1027,12 +1027,14 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
> */
> static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
> {
> + size_t alloc_size;
> void *new_arr;
>
> if (!new_n || old_n == new_n)
> goto out;
>
> - new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
> + alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
> + new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
> if (!new_arr) {
> kfree(arr);
> return NULL;
> @@ -2504,9 +2506,11 @@ static int push_jmp_history(struct bpf_verifier_env *env,
> {
> u32 cnt = cur->jmp_history_cnt;
> struct bpf_idx_pair *p;
> + size_t alloc_size;
>
> cnt++;
> - p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
> + alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
> + p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
> if (!p)
> return -ENOMEM;
> p[cnt - 1].idx = env->insn_idx;
>
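A self-contained sketch of the conversion pattern shared by the three
hunks above (the resize_elems() name is illustrative, not from the
patch): compute the byte count with the saturating size_mul(), round it
up to the slab bucket size, then allocate. size_mul() saturates at
SIZE_MAX on overflow, and kmalloc_size_roundup() passes oversized values
through unchanged, so the allocation then fails cleanly instead of the
size calculation wrapping:

  #include <linux/overflow.h>
  #include <linux/slab.h>

  /* Illustrative only: resize an array to new_n elements of given size. */
  static void *resize_elems(void *arr, size_t new_n, size_t size)
  {
          size_t alloc_size = kmalloc_size_roundup(size_mul(new_n, size));

          return krealloc(arr, alloc_size, GFP_KERNEL);
  }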