Message-ID: <aLlKJWRs5etuvFuK@krava>
Date: Thu, 4 Sep 2025 10:13:25 +0200
From: Jiri Olsa <olsajiri@...il.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: Oleg Nesterov <oleg@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Andrii Nakryiko <andrii@...nel.org>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
x86@...nel.org, Song Liu <songliubraving@...com>,
Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
Hao Luo <haoluo@...gle.com>, Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Alan Maguire <alan.maguire@...cle.com>,
David Laight <David.Laight@...lab.com>,
Thomas Weißschuh <thomas@...ch.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCHv6 perf/core 09/22] uprobes/x86: Add uprobe syscall to
speed up uprobe
On Wed, Sep 03, 2025 at 11:24:31AM -0700, Andrii Nakryiko wrote:
> On Sun, Jul 20, 2025 at 4:23 AM Jiri Olsa <jolsa@...nel.org> wrote:
> >
> > Adding a new uprobe syscall that calls uprobe handlers for a given
> > 'breakpoint' address.
> >
> > The idea is that the 'breakpoint' address calls the user space
> > trampoline which executes the uprobe syscall.
> >
> > The syscall handler reads the return address of the initial call
> > to retrieve the original 'breakpoint' address. With this address
> > we find the related uprobe object and call its consumers.
> >
> > Adding the arch_uprobe_trampoline_mapping function that provides
> > the uprobe trampoline mapping. This mapping is backed by one global
> > page initialized at __init time and shared by all the mapping
> > instances.
> >
> > We do not allow the uprobe syscall to execute if the caller is not
> > from the uprobe trampoline mapping.
> >
> > The uprobe syscall ensures the consumer (bpf program) sees the register
> > values in the state they had before the trampoline was called.
> >
> > Acked-by: Andrii Nakryiko <andrii@...nel.org>
> > Signed-off-by: Jiri Olsa <jolsa@...nel.org>
> > ---
> >  arch/x86/entry/syscalls/syscall_64.tbl |   1 +
> >  arch/x86/kernel/uprobes.c              | 139 +++++++++++++++++++++++++
> >  include/linux/syscalls.h               |   2 +
> >  include/linux/uprobes.h                |   1 +
> >  kernel/events/uprobes.c                |  17 +++
> >  kernel/sys_ni.c                        |   1 +
> >  6 files changed, 161 insertions(+)
> >
> > diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
> > index cfb5ca41e30d..9fd1291e7bdf 100644
> > --- a/arch/x86/entry/syscalls/syscall_64.tbl
> > +++ b/arch/x86/entry/syscalls/syscall_64.tbl
> > @@ -345,6 +345,7 @@
> >  333    common  io_pgetevents           sys_io_pgetevents
> >  334    common  rseq                    sys_rseq
> >  335    common  uretprobe               sys_uretprobe
> > +336    common  uprobe                  sys_uprobe
> >  # don't use numbers 387 through 423, add new calls after the last
> >  # 'common' entry
> >  424    common  pidfd_send_signal       sys_pidfd_send_signal
> > diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
> > index 6c4dcbdd0c3c..d18e1ae59901 100644
> > --- a/arch/x86/kernel/uprobes.c
> > +++ b/arch/x86/kernel/uprobes.c
> > @@ -752,6 +752,145 @@ void arch_uprobe_clear_state(struct mm_struct *mm)
> >          hlist_for_each_entry_safe(tramp, n, &state->head_tramps, node)
> >                  destroy_uprobe_trampoline(tramp);
> >  }
> > +
> > +static bool __in_uprobe_trampoline(unsigned long ip)
> > +{
> > +        struct vm_area_struct *vma = vma_lookup(current->mm, ip);
> > +
> > +        return vma && vma_is_special_mapping(vma, &tramp_mapping);
> > +}
> > +
> > +static bool in_uprobe_trampoline(unsigned long ip)
> > +{
> > +        struct mm_struct *mm = current->mm;
> > +        bool found, retry = true;
> > +        unsigned int seq;
> > +
> > +        rcu_read_lock();
> > +        if (mmap_lock_speculate_try_begin(mm, &seq)) {
> > +                found = __in_uprobe_trampoline(ip);
> > +                retry = mmap_lock_speculate_retry(mm, seq);
> > +        }
> > +        rcu_read_unlock();
> > +
> > +        if (retry) {
> > +                mmap_read_lock(mm);
> > +                found = __in_uprobe_trampoline(ip);
> > +                mmap_read_unlock(mm);
> > +        }
> > +        return found;
> > +}
> > +
> > +/*
> > + * See uprobe syscall trampoline; the call to the trampoline will push
> > + * the return address on the stack, the trampoline itself then pushes
> > + * cx, r11 and ax.
> > + */
> > +struct uprobe_syscall_args {
> > +        unsigned long ax;
> > +        unsigned long r11;
> > +        unsigned long cx;
> > +        unsigned long retaddr;
> > +};
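
For illustration only, given the layout above the syscall handler can rebuild
the register state the consumers should see roughly like this (a sketch, not
the exact patch code; it assumes the probe site was patched with a 5-byte
call into the trampoline):

    struct uprobe_syscall_args args;

    if (copy_from_user(&args, (void __user *)regs->sp, sizeof(args)))
            goto sigill;

    regs->ax  = args.ax;            /* values saved by the trampoline */
    regs->r11 = args.r11;
    regs->cx  = args.cx;
    regs->ip  = args.retaddr - 5;   /* back to the original 'breakpoint' address */
    regs->sp += sizeof(args);       /* stack as it was before the trampoline call */
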
> > +
> > +SYSCALL_DEFINE0(uprobe)
> > +{
> > +        struct pt_regs *regs = task_pt_regs(current);
> > +        struct uprobe_syscall_args args;
> > +        unsigned long ip, sp;
> > +        int err;
> > +
> > +        /* Allow execution only from uprobe trampolines. */
> > +        if (!in_uprobe_trampoline(regs->ip))
> > +                goto sigill;
>
> Hey Jiri,
>
> So I've been thinking what's the simplest and most reliable way to
> feature-detect support for this sys_uprobe (e.g., for libbpf to know
> whether we should attach at nop5 vs nop1), and clearly that would be
wrt nop5/nop1.. so the idea is to have the USDT macro emit both nop1 and
nop5 and store some info about that in the usdt's elf note, right?

libbpf will then read the usdt record, and in case it has both nop1 and
nop5, it will adjust the usdt address to nop5 or nop1 depending on whether
sys_uprobe is detected.

I recall you said you might have an idea where to store this flag in the
elf note.. or are we bumping the usdt elf note's n_type?
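
on the libbpf side I'd imagine something along these lines (just a sketch,
the struct/field names are made up and depend on how we end up encoding the
extra address in the note):

    /* hypothetical record parsed out of the usdt elf note */
    struct usdt_target {
            unsigned long nop1_addr;  /* traditional probe site */
            unsigned long nop5_addr;  /* nop5 site, 0 if the note has only nop1 */
    };

    static unsigned long usdt_pick_addr(const struct usdt_target *t,
                                        bool have_sys_uprobe)
    {
            if (t->nop5_addr && have_sys_uprobe)
                    return t->nop5_addr;  /* eligible for the uprobe syscall fast path */
            return t->nop1_addr;          /* fall back to the int3 based uprobe */
    }
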
thanks,
jirka
> to try to call uprobe() syscall not from trampoline, and expect some
> error code.
>
> How bad would it be to change this part to return some unique-enough
> error code (-ENXIO, -EDOM, whatever).
>
> Is there any reason not to do this? Security-wise it will be just fine, right?
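
fwiw if the error code route is taken, the detection could then be as simple
as something like this (a sketch, assuming the non-trampoline path is changed
to return e.g. -ENXIO instead of sending SIGILL; 336 is the x86-64 syscall
number from the table above):

    #include <errno.h>
    #include <stdbool.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static bool have_sys_uprobe(void)
    {
            /* deliberately called outside the uprobe trampoline */
            long ret = syscall(336 /* __NR_uprobe */);

            /* old kernels fail with ENOSYS, patched kernels with the
             * (hypothetical) ENXIO */
            return ret == -1 && errno == ENXIO;
    }
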
>
> > +
> > +        err = copy_from_user(&args, (void __user *)regs->sp, sizeof(args));
> > +        if (err)
> > +                goto sigill;
> > +
> > +        ip = regs->ip;
> > +
>
> [...]