Message-ID: <aMncCwre1QwJTNcL@krava>
Date: Tue, 16 Sep 2025 23:52:11 +0200
From: Jiri Olsa <olsajiri@...il.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: Oleg Nesterov <oleg@...hat.com>, Masami Hiramatsu <mhiramat@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Andrii Nakryiko <andrii@...nel.org>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
x86@...nel.org, Song Liu <songliubraving@...com>,
Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
Hao Luo <haoluo@...gle.com>, Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCHv3 perf/core 1/6] bpf: Allow uprobe program to change
context registers
On Tue, Sep 09, 2025 at 12:41:36PM -0400, Andrii Nakryiko wrote:
> On Tue, Sep 9, 2025 at 8:39 AM Jiri Olsa <jolsa@...nel.org> wrote:
> >
> > Currently an uprobe (BPF_PROG_TYPE_KPROBE) program can't write to its
> > context registers. While this makes sense for kprobe attachments, for
> > uprobe attachments it can be useful to change user space registers to
> > alter application execution.
> >
> > Since uprobe and kprobe programs share the same type (BPF_PROG_TYPE_KPROBE),
> > we can't deny write access to the context at program load time. We need
> > to check at attach time whether the program is going to be attached as
> > a kprobe or a uprobe.
> >
> > Store the program's attempt to write to the context and check it
> > during attachment.
> >
> > Signed-off-by: Jiri Olsa <jolsa@...nel.org>
> > ---
> > include/linux/bpf.h | 1 +
> > kernel/events/core.c | 4 ++++
> > kernel/trace/bpf_trace.c | 7 +++++--
> > 3 files changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index cc700925b802..404a30cde84e 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -1619,6 +1619,7 @@ struct bpf_prog_aux {
> > bool priv_stack_requested;
> > bool changes_pkt_data;
> > bool might_sleep;
> > + bool kprobe_write_ctx;
> > u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
> > struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
> > struct bpf_arena *arena;
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 28de3baff792..c3f37b266fc4 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -11238,6 +11238,10 @@ static int __perf_event_set_bpf_prog(struct perf_event *event,
> > if (prog->kprobe_override && !is_kprobe)
> > return -EINVAL;
> >
> > + /* Writing to context allowed only for uprobes. */
> > + if (prog->aux->kprobe_write_ctx && !is_uprobe)
> > + return -EINVAL;
> > +
> > if (is_tracepoint || is_syscall_tp) {
> > int off = trace_event_get_offsets(event->tp_event);
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 3ae52978cae6..dfb19e773afa 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -1521,8 +1521,6 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
> > {
> > if (off < 0 || off >= sizeof(struct pt_regs))
> > return false;
> > - if (type != BPF_READ)
> > - return false;
> > if (off % size != 0)
> > return false;
> > /*
> > @@ -1532,6 +1530,7 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
> > if (off + size > sizeof(struct pt_regs))
> > return false;
> >
> > + prog->aux->kprobe_write_ctx |= type == BPF_WRITE;
>
> nit: minor preference for
>
> if (type == BPF_WRITE)
> prog->aux->kprobe_write_ctx = true;
ok, will change
jirka
>
>
> > return true;
> > }
> >
> > @@ -2913,6 +2912,10 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
> > if (!is_kprobe_multi(prog))
> > return -EINVAL;
> >
> > + /* Writing to context is not allowed for kprobes. */
> > + if (prog->aux->kprobe_write_ctx)
> > + return -EINVAL;
> > +
> > flags = attr->link_create.kprobe_multi.flags;
> > if (flags & ~BPF_F_KPROBE_MULTI_RETURN)
> > return -EINVAL;
> > --
> > 2.51.0
> >