Message-ID: <CAEf4Bzak2rmSXQyTynRu3XPBemqbEmaxdUAFOQ-F5XRfZ7yOLg@mail.gmail.com>
Date: Thu, 24 Apr 2025 09:06:14 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Jiri Olsa <olsajiri@...il.com>
Cc: Oleg Nesterov <oleg@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Andrii Nakryiko <andrii@...nel.org>, bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, x86@...nel.org,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>, Hao Luo <haoluo@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>,
Alan Maguire <alan.maguire@...cle.com>, David Laight <David.Laight@...lab.com>,
Thomas Weißschuh <thomas@...ch.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH perf/core 10/22] uprobes/x86: Add support to optimize uprobes
On Thu, Apr 24, 2025 at 5:49 AM Jiri Olsa <olsajiri@...il.com> wrote:
>
> On Tue, Apr 22, 2025 at 05:04:03PM -0700, Andrii Nakryiko wrote:
>
> SNIP
>
> > > arch/x86/include/asm/uprobes.h | 7 +
> > > arch/x86/kernel/uprobes.c | 281 ++++++++++++++++++++++++++++++++-
> > > include/linux/uprobes.h | 6 +-
> > > kernel/events/uprobes.c | 15 +-
> > > 4 files changed, 301 insertions(+), 8 deletions(-)
> > >
> >
> > just minor nits, LGTM
> >
> > Acked-by: Andrii Nakryiko <andrii@...nel.org>
> >
> > > +int set_swbp(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
> > > + unsigned long vaddr)
> > > +{
> > > + if (should_optimize(auprobe)) {
> > > + bool optimized = false;
> > > + int err;
> > > +
> > > + /*
> > > + * We could race with another thread that already optimized the probe,
> > > + * so let's not overwrite it with int3 again in this case.
> > > + */
> > > + err = is_optimized(vma->vm_mm, vaddr, &optimized);
> > > + if (err || optimized)
> > > + return err;
> >
> > IMO, this is a bit too clever, I'd go with plain
> >
> > if (err)
> > return err;
> > if (optimized)
> > return 0; /* we are done */
> >
>
> ok
>
> > (and mirror set_orig_insn() structure, consistently)
>
> set_orig_insn does that already, right?
>
right, and that was my point
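To spell out the structure I mean: something like this userspace sketch
shows the control flow (kernel types and helpers are stubbed out here, so
the one-argument signatures of is_optimized()/uprobe_write_opcode() are
stand-ins for the real API, not a proposal to change it):

```c
#include <assert.h>
#include <stdbool.h>

static int  is_optimized_err;    /* error the stubbed is_optimized() returns */
static bool is_optimized_state;  /* value it reports via *optimized */

/* stand-in for the real is_optimized(mm, vaddr, &optimized) helper */
static int is_optimized(bool *optimized)
{
	*optimized = is_optimized_state;
	return is_optimized_err;
}

/* stand-in for uprobe_write_opcode(vma, vaddr, UPROBE_SWBP_INSN, true) */
static int uprobe_write_opcode(void)
{
	return 0; /* pretend the int3 write succeeded */
}

/* set_swbp() flow with the explicit error/already-done split */
static int set_swbp(bool should_optimize)
{
	if (should_optimize) {
		bool optimized = false;
		int err;

		err = is_optimized(&optimized);
		if (err)
			return err;
		if (optimized)
			return 0; /* we are done */
	}
	return uprobe_write_opcode();
}
```

Same behavior as the combined `if (err || optimized) return err;`, just
with each exit condition stated separately, mirroring set_orig_insn().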
> >
> >
> > > + }
> > > + return uprobe_write_opcode(vma, vaddr, UPROBE_SWBP_INSN, true);
> > > +}
> > > +
> > > +int set_orig_insn(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
> > > + unsigned long vaddr)
> > > +{
> > > + if (test_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags)) {
> > > + struct mm_struct *mm = vma->vm_mm;
> > > + bool optimized = false;
> > > + int err;
> > > +
> > > + err = is_optimized(mm, vaddr, &optimized);
> > > + if (err)
> > > + return err;
> > > + if (optimized)
> > > + WARN_ON_ONCE(swbp_unoptimize(auprobe, vma, vaddr));
> > > + }
> > > + return uprobe_write_opcode(vma, vaddr, *(uprobe_opcode_t *)&auprobe->insn, false);
> > > +}
> > > +
> > > +static int __arch_uprobe_optimize(struct mm_struct *mm, unsigned long vaddr)
> > > +{
> > > + struct uprobe_trampoline *tramp;
> > > + struct vm_area_struct *vma;
> > > + int err = 0;
> > > +
> > > + vma = find_vma(mm, vaddr);
> > > + if (!vma)
> > > + return -1;
> >
> > this is EPERM, will be confusing to debug... why not -EINVAL?
> >
> > > + tramp = uprobe_trampoline_get(vaddr);
> > > + if (!tramp)
> > > + return -1;
> >
> > ditto
>
> so the error value is not exposed to user space in this case; we try
> to optimize on the first hit with:
>
> handle_swbp()
> {
> arch_uprobe_optimize()
> {
>
> if (__arch_uprobe_optimize(mm, vaddr))
> set_bit(ARCH_UPROBE_FLAG_OPTIMIZE_FAIL, &auprobe->flags);
>
> }
> }
>
> and set the ARCH_UPROBE_FLAG_OPTIMIZE_FAIL flag bit in case of error;
> plus there's a WARN for swbp_optimize(), which should pass in case we
> get that far
yeah, I know, but I don't think we should deviate from the kernel-wide
-Exxx convention for returning errors from functions just because this
error doesn't make it all the way to user space
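Concretely, a bare -1 is -EPERM, which is misleading if it ever shows up
in tracing. A userspace sketch of the flow with distinct codes (helpers
stubbed out, and the specific -EINVAL/-ENOMEM choices here are only my
suggestion, not something the patch mandates):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

static void *vma_result;    /* what the stubbed find_vma() returns */
static void *tramp_result;  /* what the stubbed trampoline lookup returns */

/* stand-ins for find_vma(mm, vaddr) and uprobe_trampoline_get(vaddr) */
static void *find_vma(void)              { return vma_result; }
static void *uprobe_trampoline_get(void) { return tramp_result; }
static int   swbp_optimize(void)         { return 0; }

/* __arch_uprobe_optimize() flow with -Exxx codes instead of bare -1 */
static int __arch_uprobe_optimize(void)
{
	if (!find_vma())
		return -EINVAL;  /* was -1, which reads as -EPERM */
	if (!uprobe_trampoline_get())
		return -ENOMEM;  /* trampoline lookup/allocation failed */
	return swbp_optimize();
}
```

Even though the value only feeds the OPTIMIZE_FAIL bit today, distinct
codes cost nothing and make any future WARN or printk self-explanatory.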
>
> thanks,
> jirka
>
> >
> > > + err = swbp_optimize(vma, vaddr, tramp->vaddr);
> > > + if (WARN_ON_ONCE(err))
> > > + uprobe_trampoline_put(tramp);
> > > + return err;
> > > +}
> > > +
> >
> > [...]