Message-ID: <CAADnVQKuBOc-jqaK1H5Usb6PKFWdbBoo8tzVOU2jzXwa1ENd0g@mail.gmail.com>
Date: Tue, 27 Apr 2021 18:10:32 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Jiri Olsa <jolsa@...nel.org>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andriin@...com>,
Network Development <netdev@...r.kernel.org>,
bpf <bpf@...r.kernel.org>, Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...omium.org>
Subject: Re: [PATCH] bpf: Fix recursion check in trampoline
On Tue, Apr 27, 2021 at 3:42 PM Jiri Olsa <jolsa@...nel.org> wrote:
>
> The recursion check in __bpf_prog_enter and __bpf_prog_exit leaves
> some (not inlined) functions unprotected:
>
> In __bpf_prog_enter:
> - migrate_disable is called before prog->active is checked
>
> In __bpf_prog_exit:
> - migrate_enable and rcu_read_unlock_strict are called after
> prog->active is decreased
>
> When attaching a trampoline to them we get a panic like:
>
> traps: PANIC: double fault, error_code: 0x0
> double fault: 0000 [#1] SMP PTI
> RIP: 0010:__bpf_prog_enter+0x4/0x50
> ...
> Call Trace:
> <IRQ>
> bpf_trampoline_6442466513_0+0x18/0x1000
> migrate_disable+0x5/0x50
> __bpf_prog_enter+0x9/0x50
> bpf_trampoline_6442466513_0+0x18/0x1000
> migrate_disable+0x5/0x50
> __bpf_prog_enter+0x9/0x50
> bpf_trampoline_6442466513_0+0x18/0x1000
> migrate_disable+0x5/0x50
> __bpf_prog_enter+0x9/0x50
> bpf_trampoline_6442466513_0+0x18/0x1000
> migrate_disable+0x5/0x50
> ...
>
> Move the recursion check before the rest of the calls in
> __bpf_prog_enter, and make it the last operation in __bpf_prog_exit.
>
> Signed-off-by: Jiri Olsa <jolsa@...nel.org>
> ---
> kernel/bpf/trampoline.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 4aa8b52adf25..301735f7e88e 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -558,12 +558,12 @@ static void notrace inc_misses_counter(struct bpf_prog *prog)
>  u64 notrace __bpf_prog_enter(struct bpf_prog *prog)
>          __acquires(RCU)
>  {
> -        rcu_read_lock();
> -        migrate_disable();
>          if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
>                  inc_misses_counter(prog);
>                  return 0;
>          }
> +        rcu_read_lock();
> +        migrate_disable();
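For reference, the exit-side half of the change is not quoted above;
going by the description, the patched helpers would look roughly like
this (a reconstruction of the idea, not the literal diff):

    u64 notrace __bpf_prog_enter(struct bpf_prog *prog)
            __acquires(RCU)
    {
            /* recursion check runs first, before any traceable callee */
            if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
                    inc_misses_counter(prog);
                    return 0;
            }
            rcu_read_lock();
            migrate_disable();
            return bpf_prog_start_time();
    }

    void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
            __releases(RCU)
    {
            update_prog_stats(prog, start);
            migrate_enable();
            rcu_read_unlock();
            /* recursion counter dropped last, after all traceable callees */
            __this_cpu_dec(*(prog->active));
    }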
That obviously doesn't work.
After the cpu_inc the task can migrate, and the matching cpu_dec
will then happen on a different cpu, likely underflowing that
cpu's counter into negative values.
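Concretely, with the patched ordering the inc happens before
migrate_disable() and the dec after migrate_enable(), so a sketch of
the failure is:

    CPU0                                CPU1
    __bpf_prog_enter()
      __this_cpu_inc_return()           /* CPU0 counter: 0 -> 1 */
      /* preempted and migrated before migrate_disable() */
                                        prog runs
                                        __bpf_prog_exit()
                                          __this_cpu_dec()
                                          /* CPU1 counter: 0 -> -1 */

CPU0's counter is then stuck at 1 and CPU1's at -1, so later entries
on either cpu see inc_return != 1 and are wrongly counted as recursion.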
We can either mark migrate_disable as nokprobe/notrace or have a
bpf-trampoline-specific denylist.
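One possible shape for such a denylist, sketched with the existing
BTF_SET_START/btf_id_set_contains machinery (hypothetical; nothing
like this exists in the tree yet, and the names are made up):

    /* hypothetical: functions the trampoline itself calls outside
     * the prog->active protection; refuse fentry/fexit attach here
     */
    BTF_SET_START(btf_id_deny)
    BTF_ID(func, migrate_disable)
    BTF_ID(func, migrate_enable)
    BTF_ID(func, rcu_read_unlock_strict)
    BTF_SET_END(btf_id_deny)

        /* in the attach-target checks, e.g. bpf_check_attach_target(): */
        if (btf_id_set_contains(&btf_id_deny, btf_id))
                return -EINVAL;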