Message-ID: <1946607.CQOukoFCf9@7950hx>
Date: Fri, 14 Nov 2025 18:57:39 +0800
From: Menglong Dong <menglong.dong@...ux.dev>
To: menglong8.dong@...il.com, ast@...nel.org, rostedt@...dmis.org,
bot+bpf-ci@...nel.org
Cc: daniel@...earbox.net, john.fastabend@...il.com, andrii@...nel.org,
martin.lau@...ux.dev, eddyz87@...il.com, song@...nel.org,
yonghong.song@...ux.dev, kpsingh@...nel.org, sdf@...ichev.me,
haoluo@...gle.com, jolsa@...nel.org, mhiramat@...nel.org,
mark.rutland@....com, mathieu.desnoyers@...icios.com, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
ast@...nel.org, andrii@...nel.org, daniel@...earbox.net,
martin.lau@...nel.org, eddyz87@...il.com, yonghong.song@...ux.dev,
clm@...a.com, ihor.solodrai@...ux.dev
Subject: Re: [PATCH RFC bpf-next 1/7] ftrace: introduce FTRACE_OPS_FL_JMP
On 2025/11/14 18:20, bot+bpf-ci@...nel.org wrote:
> > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > index 7ded7df6e..14705dec1 100644
> > --- a/include/linux/ftrace.h
> > +++ b/include/linux/ftrace.h
> > @@ -351,6 +351,7 @@ enum {
> > FTRACE_OPS_FL_DIRECT = BIT(17),
> > FTRACE_OPS_FL_SUBOP = BIT(18),
> > FTRACE_OPS_FL_GRAPH = BIT(19),
> > + FTRACE_OPS_FL_JMP = BIT(20),
> > };
> >
> > #ifndef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
>
> [ ... ]
>
> > +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
> > +static inline bool ftrace_is_jmp(unsigned long addr)
> > +{
> > + return addr & 1;
> > +}
> > +
> > +static inline unsigned long ftrace_jmp_set(unsigned long addr)
> > +{
> > + return addr | 1UL;
> > +}
> > +
> > +static inline unsigned long ftrace_jmp_get(unsigned long addr)
> > +{
> > + return addr & ~1UL;
> > +}
>
> [ ... ]
>
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index 42bd2ba68..b143a79e6 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -5938,7 +5938,8 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
> > for (i = 0; i < size; i++) {
> > hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
> > del = __ftrace_lookup_ip(direct_functions, entry->ip);
> > - if (del && del->direct == addr) {
> > + if (del && ftrace_jmp_get(del->direct) ==
> > + ftrace_jmp_get(addr)) {
> > remove_hash_entry(direct_functions, del);
> > kfree(del);
> > }
>
> [ ... ]
>
> > @@ -5994,6 +5995,9 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> >
> > mutex_lock(&direct_mutex);
> >
> > + if (ops->flags & FTRACE_OPS_FL_JMP)
> > + addr = ftrace_jmp_set(addr);
> > +
>
> When FTRACE_OPS_FL_JMP is set, this encodes the flag into the last bit
> of addr. The modified addr is then stored in entry->direct and
> ops->direct_call. However, this flagged address later flows through
> ftrace_find_rec_direct()->ftrace_get_addr_new()->ftrace_make_call()->
> ftrace_call_replace() without being masked.
>
> Can ftrace_call_replace() generate correct CALL instructions when addr
> has the LSB set? On x86, ftrace_call_replace() calls text_gen_insn()
> with the raw address to generate a relative CALL instruction. If addr
> has bit 0 set to 1, this would generate a CALL to an odd address,
> causing execution to skip the first instruction byte at the trampoline.
If DYNAMIC_FTRACE_WITH_JMP is not enabled, ftrace_jmp_set() returns
addr unchanged, so that case is fine. When DYNAMIC_FTRACE_WITH_JMP
is enabled (take the x86_64 implementation in the next patch as an
example), ftrace_call_replace() recovers the real address with
ftrace_jmp_get().
The arch-specific changes are split out into the next patch.
>
> Similarly, at runtime call_direct_funcs() reads ops->direct_call and
> passes it to arch_ftrace_set_direct_caller() without masking. The x86
> assembly then does RET to this address, which would jump to addr+1 if
> the LSB is set.
>
> The commit message says "we can tell if we should use 'jmp' for the
> callback in ftrace_call_replace()" but ftrace_call_replace() isn't
> modified to check or mask the bit. Are there missing changes to handle
> the encoded address in these paths?
ftrace_call_replace() is modified in the next patch. And to reduce
the risk, we can wrap the code here in CONFIG_DYNAMIC_FTRACE_WITH_JMP.
>
> > /* Make sure requested entries are not already registered.. */
> > size = 1 << hash->size_bits;
> > for (i = 0; i < size; i++) {
>
> [ ... ]
>
> > @@ -6117,6 +6121,9 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> >
> > lockdep_assert_held_once(&direct_mutex);
> >
> > + if (ops->flags & FTRACE_OPS_FL_JMP)
> > + addr = ftrace_jmp_set(addr);
> > +
>
> Same issue here - the flagged addr is stored but downstream consumers
> don't mask it before using as a jump target.
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19360353328
>