Message-ID: <CAADnVQJeWj2t9XSRxK5NU99GJsOBnropoOOohDNPj7N2xZFGEQ@mail.gmail.com>
Date: Wed, 30 Oct 2024 07:59:51 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Jordan Rife <jrife@...gle.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Andrii Nakryiko <andrii.nakryiko@...il.com>, Alexei Starovoitov <ast@...nel.org>, bpf <bpf@...r.kernel.org>,
Joel Fernandes <joel@...lfernandes.org>, LKML <linux-kernel@...r.kernel.org>,
Mark Rutland <mark.rutland@....com>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Masami Hiramatsu <mhiramat@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Michael Jeanson <mjeanson@...icios.com>, Namhyung Kim <namhyung@...nel.org>,
"Paul E. McKenney" <paulmck@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>, Thomas Gleixner <tglx@...utronix.de>, Yonghong Song <yhs@...com>
Subject: Re: [RFC PATCH v4 4/4] tracing: Add might_fault() check in
__DO_TRACE() for syscall
On Mon, Oct 28, 2024 at 5:28 PM Jordan Rife <jrife@...gle.com> wrote:
>
>
> 1. Applied my patch from [1] to prevent any failures resulting from the
> as-of-yet unpatched BPF code that uses call_rcu(). This lets us
...
> [1]: https://lore.kernel.org/bpf/20241023145640.1499722-1-jrife@google.com/
> [2]: https://lore.kernel.org/bpf/67121037.050a0220.10f4f4.000f.GAE@google.com/
> [3]: https://syzkaller.appspot.com/x/repro.syz?x=153ef887980000
>
>
> [ 687.323615][T16276] ==================================================================
> [ 687.325235][T16276] BUG: KFENCE: use-after-free read in __traceiter_sys_enter+0x30/0x50
> [ 687.325235][T16276]
> [ 687.327193][T16276] Use-after-free read at 0xffff88807ec60028 (in kfence-#47):
> [ 687.328404][T16276] __traceiter_sys_enter+0x30/0x50
> [ 687.329338][T16276] syscall_trace_enter+0x1ea/0x2b0
> [ 687.330021][T16276] do_syscall_64+0x1ec/0x250
> [ 687.330816][T16276] entry_SYSCALL_64_after_hwframe+0x77/0x7f
> [ 687.331826][T16276]
> [ 687.332291][T16276] kfence-#47: 0xffff88807ec60000-0xffff88807ec60057, size=88, cache=kmalloc-96
> [ 687.332291][T16276]
> [ 687.334265][T16276] allocated by task 16281 on cpu 1 at 683.953385s (3.380878s ago):
> [ 687.335615][T16276] tracepoint_add_func+0x28a/0xd90
> [ 687.336424][T16276] tracepoint_probe_register_prio_may_exist+0xa2/0xf0
> [ 687.337416][T16276] bpf_probe_register+0x186/0x200
> [ 687.338174][T16276] bpf_raw_tp_link_attach+0x21f/0x540
> [ 687.339233][T16276] __sys_bpf+0x393/0x4fa0
> [ 687.340042][T16276] __x64_sys_bpf+0x78/0xc0
> [ 687.340801][T16276] do_syscall_64+0xcb/0x250
> [ 687.341623][T16276] entry_SYSCALL_64_after_hwframe+0x77/0x7f

I think the stack trace shows that patch [1] isn't really fixing it.
The UAF is on the access to the bpf_link in __traceiter_sys_enter,
while your patch [1] and all prior attempts to "fix" this were only
delaying the freeing of the bpf_prog.
The issue is no longer reproducing purely by luck.