Message-ID: <aKW9Q0cOhNL0XV0R@krava>
Date: Wed, 20 Aug 2025 14:19:15 +0200
From: Jiri Olsa <olsajiri@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Oleg Nesterov <oleg@...hat.com>, Andrii Nakryiko <andrii@...nel.org>,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, x86@...nel.org,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
Hao Luo <haoluo@...gle.com>, Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Alan Maguire <alan.maguire@...cle.com>,
David Laight <David.Laight@...lab.com>,
Thomas Weißschuh <thomas@...ch.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCHv6 perf/core 10/22] uprobes/x86: Add support to optimize uprobes

On Tue, Aug 19, 2025 at 09:15:15PM +0200, Peter Zijlstra wrote:
> On Sun, Jul 20, 2025 at 01:21:20PM +0200, Jiri Olsa wrote:
>
> > +static bool __is_optimized(uprobe_opcode_t *insn, unsigned long vaddr)
> > +{
> > +	struct __packed __arch_relative_insn {
> > +		u8 op;
> > +		s32 raddr;
> > +	} *call = (struct __arch_relative_insn *) insn;
>
> Not something you need to clean up now I suppose, but we could do with
> unifying this thing. We have a bunch of instances around.
ok, I noticed that too, will send a patch for it
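fwiw the rough idea would be something like below (untested sketch, the
helper name and placement are made up, not part of this patch):

  /* shared layout of a rel32 call/jmp, could live in e.g. asm/insn.h */
  struct __packed __arch_relative_insn {
	u8  op;
	s32 raddr;
  };

  /* target address of a rel32 call/jmp instruction located at vaddr */
  static inline unsigned long __arch_relative_insn_target(const void *insn,
							  unsigned long vaddr)
  {
	const struct __arch_relative_insn *p = insn;

	/* target = address of the next instruction + signed displacement */
	return vaddr + sizeof(*p) + p->raddr;
  }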
>
> > +
> > +	if (!is_call_insn(insn))
> > +		return false;
> > +	return __in_uprobe_trampoline(vaddr + 5 + call->raddr);
> > +}
>
> > +void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
> > +{
> > +	struct mm_struct *mm = current->mm;
> > +	uprobe_opcode_t insn[5];
> > +
> > +	/*
> > +	 * Do not optimize if shadow stack is enabled, the return address hijack
> > +	 * code in arch_uretprobe_hijack_return_addr updates wrong frame when
> > +	 * the entry uprobe is optimized and the shadow stack crashes the app.
> > +	 */
> > +	if (shstk_is_enabled())
> > +		return;
>
> Kernel should be able to fix up userspace shadow stack just fine.
ok, will send a follow-up fix
>
> > +	if (!should_optimize(auprobe))
> > +		return;
> > +
> > +	mmap_write_lock(mm);
> > +
> > +	/*
> > +	 * Check if some other thread already optimized the uprobe for us,
> > +	 * if it's the case just go away silently.
> > +	 */
> > +	if (copy_from_vaddr(mm, vaddr, &insn, 5))
> > +		goto unlock;
> > +	if (!is_swbp_insn((uprobe_opcode_t*) &insn))
> > +		goto unlock;
> > +
> > +	/*
> > +	 * If we fail to optimize the uprobe we set the fail bit so the
> > +	 * above should_optimize will fail from now on.
> > +	 */
> > +	if (__arch_uprobe_optimize(auprobe, mm, vaddr))
> > +		set_bit(ARCH_UPROBE_FLAG_OPTIMIZE_FAIL, &auprobe->flags);
> > +
> > +unlock:
> > +	mmap_write_unlock(mm);
> > +}
> > +
> > +static bool can_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
> > +{
> > +	if (memcmp(&auprobe->insn, x86_nops[5], 5))
> > +		return false;
> > +	/* We can't do cross page atomic writes yet. */
> > +	return PAGE_SIZE - (vaddr & ~PAGE_MASK) >= 5;
> > +}
>
> This seems needlessly restrictive. Something like:
>
> static bool is_nop5(const char *buf)
> {
> 	struct insn insn;
> 	int ret;
>
> 	ret = insn_decode_kernel(&insn, buf);
> 	if (ret < 0)
> 		return false;
>
> 	if (insn.length != 5)
> 		return false;
>
> 	if (insn.opcode.bytes[0] != 0x0f ||
> 	    insn.opcode.bytes[1] != 0x1f)
> 		return false;
>
> 	return true;
> }
>
> Should do I suppose.
ok, looks good, should I respin with this, or is a follow-up ok?
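with is_nop5 above, can_optimize would then be roughly (untested sketch):

  static bool can_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
  {
	if (!is_nop5((const char *) auprobe->insn))
		return false;
	/* We can't do cross page atomic writes yet. */
	return PAGE_SIZE - (vaddr & ~PAGE_MASK) >= 5;
  }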
> Anyway, I think something like:
>
> f0 0f 1f 44 00 00 lock nopl 0(%eax, %eax, 1)
>
> is a valid NOP5 at +1 and will 'optimize' and result in:
>
> f0 e8 disp32 lock call disp32
>
> which will #UD.
>
> But this is nearly unfixable. Just doing my best to find weirdo cases
> ;-)
nice, but I think that if the user puts a non-optimized uprobe in the middle
of an instruction, like at lock-nop5 + 1, the app would crash as well
thanks,
jirka