Message-ID: <2339465e-1f87-595a-2954-eb92b6bfa9cc@huawei.com>
Date: Tue, 15 Feb 2022 12:29:42 +0800
From: Hou Tao <houtao1@...wei.com>
To: Yonghong Song <yhs@...com>, Alexei Starovoitov <ast@...nel.org>
CC: Martin KaFai Lau <kafai@...com>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Song Liu <songliubraving@...com>,
John Fastabend <john.fastabend@...il.com>,
<netdev@...r.kernel.org>, <bpf@...r.kernel.org>
Subject: Re: [PATCH bpf-next v3] bpf: reject kfunc calls that overflow
insn->imm
Hi,
On 2/9/2022 11:42 PM, Yonghong Song wrote:
>
>
> On 2/9/22 1:11 AM, Hou Tao wrote:
>> Now kfunc call uses s32 to represent the offset between the address
>> of kfunc and __bpf_call_base, but it doesn't check whether or not
>> s32 will be overflowed, so add an extra checking to reject these
>> invalid kfunc calls.
>>
>> Signed-off-by: Hou Tao <houtao1@...wei.com>
>
> The patch itself looks good. But the commit message doesn't specify
> whether this is a theoretical case or one that can really happen in
> practice. I looked at the patch history and found the following
> commit message in v1 of the patch ([1]):
>
> > Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
> > randomization range to 2 GB"), for arm64 whether KASLR is enabled
> > or not, the module is placed within 2GB of the kernel region, so
> > s32 in bpf_kfunc_desc is sufficient to represent the offset of
> > module function relative to __bpf_call_base. The only thing needed
> > is to override bpf_jit_supports_kfunc_call().
>
> So it does look like the overflow is possible.
>
> So I suggest you add more description in this patch of *when* the
> overflow may happen.
Will do in v5.
>
> And you can also retain your previous selftest patch to test
> this verifier change.
Is it necessary? IMO it would just duplicate the newly-added logic.
Regards,
Tao
>
> [1] https://lore.kernel.org/bpf/20220119144942.305568-1-houtao1@huawei.com/
>
>> ---
>> v3:
>> * call BPF_CALL_IMM() once (suggested by Yonghong)
>>
>> v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
>> * instead of checking the overflow in selftests, just reject
>> these kfunc calls directly in verifier
>>
>> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
>> ---
>> kernel/bpf/verifier.c | 11 ++++++++++-
>> 1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 1ae41d0cf96c..eb72e6139e2b 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1842,6 +1842,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>> struct bpf_kfunc_desc *desc;
>> const char *func_name;
>> struct btf *desc_btf;
>> + unsigned long call_imm;
>> unsigned long addr;
>> int err;
>> @@ -1926,9 +1927,17 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>> return -EINVAL;
>> }
>> + call_imm = BPF_CALL_IMM(addr);
>> + /* Check whether or not the relative offset overflows desc->imm */
>> + if ((unsigned long)(s32)call_imm != call_imm) {
>> + verbose(env, "address of kernel function %s is out of range\n",
>> + func_name);
>> + return -EINVAL;
>> + }
>> +
>> desc = &tab->descs[tab->nr_descs++];
>> desc->func_id = func_id;
>> - desc->imm = BPF_CALL_IMM(addr);
>> + desc->imm = call_imm;
>> desc->offset = offset;
>> err = btf_distill_func_proto(&env->log, desc_btf,
>> func_proto, func_name,
> .