Message-ID: <CAG48ez3i5haHCc8EQMVNjKnd9xYwMcp4sbW_Y8DRpJCidJotjw@mail.gmail.com>
Date: Fri, 6 Dec 2024 23:25:14 +0100
From: Jann Horn <jannh@...gle.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>, Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>, Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>, KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>, Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Delyan Kratunov <delyank@...com>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH bpf v2] bpf: Fix prog_array UAF in __uprobe_perf_func()
On Fri, Dec 6, 2024 at 11:15 PM Andrii Nakryiko
<andrii.nakryiko@...il.com> wrote:
> On Fri, Dec 6, 2024 at 12:45 PM Jann Horn <jannh@...gle.com> wrote:
> >
> > Currently, the pointer stored in call->prog_array is loaded in
> > __uprobe_perf_func(), with no RCU annotation and no RCU protection, so the
> > loaded pointer can immediately be dangling. Later,
> > bpf_prog_run_array_uprobe() starts an RCU-trace read-side critical section,
> > but this is too late. It then uses rcu_dereference_check(), but this use of
> > rcu_dereference_check() does not actually dereference anything.
> >
> > It looks like the intention was to pass a pointer to the member
> > call->prog_array into bpf_prog_run_array_uprobe() and actually dereference
> > the pointer in there. Fix the issue by actually doing that.
> >
> > Fixes: 8c7dcb84e3b7 ("bpf: implement sleepable uprobes by chaining gps")
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Jann Horn <jannh@...gle.com>
> > ---
> > To reproduce, in include/linux/bpf.h, patch in a mdelay(10000) directly
> > before the might_fault() in bpf_prog_run_array_uprobe() and add an
> > include of linux/delay.h.
> >
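(Concretely, the instrumentation is just something along these lines
inside bpf_prog_run_array_uprobe(), sketched here rather than quoted
from my local tree:

	/* reproducer only: widen the race window before the array is used */
	mdelay(10000);
	might_fault();

plus #include <linux/delay.h> at the top of include/linux/bpf.h.)
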
> > Build this userspace program:
> >
> > ```
> > $ cat dummy.c
> > #include <stdio.h>
> > int main(void) {
> > printf("hello world\n");
> > }
> > $ gcc -o dummy dummy.c
> > ```
> >
> > Then build this BPF program and load it (change the path to point to
> > the "dummy" binary you built):
> >
> > ```
> > $ cat bpf-uprobe-kern.c
> > #include <linux/bpf.h>
> > #include <bpf/bpf_helpers.h>
> > #include <bpf/bpf_tracing.h>
> > char _license[] SEC("license") = "GPL";
> >
> > SEC("uprobe//home/user/bpf-uprobe-uaf/dummy:main")
> > int BPF_UPROBE(main_uprobe) {
> > bpf_printk("main uprobe triggered\n");
> > return 0;
> > }
> > $ clang -O2 -g -target bpf -c -o bpf-uprobe-kern.o bpf-uprobe-kern.c
> > $ sudo bpftool prog loadall bpf-uprobe-kern.o uprobe-test autoattach
> > ```
> >
> > Then run ./dummy in one terminal, and after launching it, run
> > `sudo umount uprobe-test` in another terminal. Once the 10-second
> > mdelay() is over, a use-after-free should occur, which may or may
> > not crash your kernel at the `prog->sleepable` check in
> > bpf_prog_run_array_uprobe() depending on your luck.
> > ---
> > Changes in v2:
> > - remove diff chunk in patch notes that confuses git
> > - Link to v1: https://lore.kernel.org/r/20241206-bpf-fix-uprobe-uaf-v1-1-6869c8a17258@google.com
> > ---
> > include/linux/bpf.h | 4 ++--
> > kernel/trace/trace_uprobe.c | 2 +-
> > 2 files changed, 3 insertions(+), 3 deletions(-)
> >
>
> Looking at how the similar-in-spirit bpf_prog_run_array() is meant to
> be used, it seems like it is the caller's responsibility to
> RCU-dereference the array and hold the RCU critical section before
> calling into bpf_prog_run_array(). So I wonder if it's best to do this
> instead (Gmail will butcher the diff, but it's about the idea):
Yeah, that's the other option I was considering. That would be more
consistent with the existing bpf_prog_run_array(), but has the
downside of unnecessarily pushing responsibility up to the caller...
I'm fine with either.
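
For context, that is also the contract of the existing
bpf_prog_run_array(): it expects the plain RCU read lock to be held
(it checks with RCU_LOCKDEP_WARN()), and callers do the dereference
themselves, roughly like this (generic caller sketched for
illustration, not quoting any particular call site):

	rcu_read_lock();
	ret = bpf_prog_run_array(rcu_dereference(call->prog_array),
				 ctx, bpf_prog_run);
	rcu_read_unlock();
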
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index eaee2a819f4c..4b8a9edd3727 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2193,26 +2193,25 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
> * rcu-protected dynamically sized maps.
> */
> static __always_inline u32
> -bpf_prog_run_array_uprobe(const struct bpf_prog_array __rcu *array_rcu,
> +bpf_prog_run_array_uprobe(const struct bpf_prog_array *array,
> const void *ctx, bpf_prog_run_fn run_prog)
> {
> const struct bpf_prog_array_item *item;
> const struct bpf_prog *prog;
> - const struct bpf_prog_array *array;
> struct bpf_run_ctx *old_run_ctx;
> struct bpf_trace_run_ctx run_ctx;
> u32 ret = 1;
>
> might_fault();
> + RCU_LOCKDEP_WARN(!rcu_read_lock_trace_held(), "no rcu lock held");
> +
> +	if (unlikely(!array))
> +		return ret;
>
> - rcu_read_lock_trace();
> migrate_disable();
>
> run_ctx.is_uprobe = true;
>
> - array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
> - if (unlikely(!array))
> - goto out;
> old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
> item = &array->items[0];
> while ((prog = READ_ONCE(item->prog))) {
> @@ -2229,7 +2228,6 @@ bpf_prog_run_array_uprobe(const struct bpf_prog_array __rcu *array_rcu,
> bpf_reset_run_ctx(old_run_ctx);
> out:
> migrate_enable();
> - rcu_read_unlock_trace();
> return ret;
> }
>
> diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
> index fed382b7881b..87a2b8fefa90 100644
> --- a/kernel/trace/trace_uprobe.c
> +++ b/kernel/trace/trace_uprobe.c
> @@ -1404,7 +1404,9 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
> if (bpf_prog_array_valid(call)) {
> u32 ret;
>
> + rcu_read_lock_trace();
> ret = bpf_prog_run_array_uprobe(call->prog_array,
> regs, bpf_prog_run);
But then this should be something like this (possibly split across
multiple lines with a helper variable or such):
	ret = bpf_prog_run_array_uprobe(rcu_dereference_check(call->prog_array,
							       rcu_read_lock_trace_held()),
					regs, bpf_prog_run);
> + rcu_read_unlock_trace();
> if (!ret)
> return;
> }
>
>
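With a helper variable, the call site would then end up as something
like this (untested sketch combining your diff with the
rcu_dereference_check() above):

	if (bpf_prog_array_valid(call)) {
		const struct bpf_prog_array *array;
		u32 ret;

		rcu_read_lock_trace();
		array = rcu_dereference_check(call->prog_array,
					      rcu_read_lock_trace_held());
		ret = bpf_prog_run_array_uprobe(array, regs, bpf_prog_run);
		rcu_read_unlock_trace();
		if (!ret)
			return;
	}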