Message-ID: <1503e9c4-7150-3244-4710-7b6b2d59e0da@fb.com>
Date: Wed, 28 Jul 2021 12:13:17 -0700
From: Yonghong Song <yhs@...com>
To: Johan Almbladh <johan.almbladh@...finetworks.com>,
<ast@...nel.org>, <daniel@...earbox.net>, <andrii@...nel.org>
CC: <kafai@...com>, <songliubraving@...com>,
<john.fastabend@...il.com>, <kpsingh@...nel.org>,
<Tony.Ambardar@...il.com>, <netdev@...r.kernel.org>,
<bpf@...r.kernel.org>
Subject: Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
On 7/28/21 9:47 AM, Johan Almbladh wrote:
> Before, the interpreter allowed up to MAX_TAIL_CALL_CNT + 1 tail calls.
> Now precisely MAX_TAIL_CALL_CNT is allowed, which is in line with the
> behavior of the x86 JITs.
>
> Signed-off-by: Johan Almbladh <johan.almbladh@...finetworks.com>
LGTM.
Acked-by: Yonghong Song <yhs@...com>
I also checked the arm/arm64 JITs. I saw the following comment:
/* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
* goto out;
* tail_call_cnt++;
*/
Maybe we have the same MAX_TAIL_CALL_CNT + 1 issue
in the arm/arm64 JITs as well?
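
To illustrate the off-by-one, here is a minimal userspace sketch of
the counter logic, assuming MAX_TAIL_CALL_CNT is 32 as in the kernel.
The simulate() helper is made up for illustration; it only models the
check-then-increment sequence, not the interpreter itself:

#include <stdio.h>

#define MAX_TAIL_CALL_CNT 32

/* Count how many tail calls the check lets through.
 * strict = 1 models the fixed check (>=), strict = 0 the old one (>).
 */
static int simulate(int strict)
{
	int tail_call_cnt = 0, taken = 0;

	for (;;) {
		if (strict ? tail_call_cnt >= MAX_TAIL_CALL_CNT
			   : tail_call_cnt >  MAX_TAIL_CALL_CNT)
			break;		/* "goto out" in the interpreter */
		tail_call_cnt++;
		taken++;		/* one more tail call executed */
	}
	return taken;
}

int main(void)
{
	printf("old check: %d tail calls\n", simulate(0));	/* 33 */
	printf("new check: %d tail calls\n", simulate(1));	/* 32 */
	return 0;
}

Because the old check tests with > before the increment, counter
values 0 through 32 all pass, i.e. MAX_TAIL_CALL_CNT + 1 tail calls.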
> ---
> kernel/bpf/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 9b1577498373..67682b3afc84 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1559,7 +1559,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
>
> if (unlikely(index >= array->map.max_entries))
> goto out;
> - if (unlikely(tail_call_cnt > MAX_TAIL_CALL_CNT))
> + if (unlikely(tail_call_cnt >= MAX_TAIL_CALL_CNT))
> goto out;
>
> tail_call_cnt++;
>
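For reference, the limit can be exercised from BPF with a program that
tail-calls itself through a prog array, along the lines of the existing
tailcall selftests. This is only a sketch: the jmp_table map and the
section name are assumptions, and userspace must still populate entry 0
of jmp_table with this program's own fd before attaching it:

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

int count = 0;	/* read from userspace after one packet */

SEC("tc")
int classifier(struct __sk_buff *skb)
{
	count++;	/* executions = initial run + tail calls taken */
	bpf_tail_call(skb, &jmp_table, 0);
	return 0;	/* reached only once the limit trips */
}

char _license[] SEC("license") = "GPL";

With the fix applied, count should end at MAX_TAIL_CALL_CNT + 1 in the
interpreter (the initial run plus MAX_TAIL_CALL_CNT tail calls); before
the fix it ended one higher.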