Message-ID: <87sev78dnz.fsf@all.your.base.are.belong.to.us>
Date: Wed, 14 Aug 2024 14:57:52 +0200
From: Björn Töpel <bjorn@...nel.org>
To: Andy Chiu <andy.chiu@...ive.com>, Paul Walmsley
<paul.walmsley@...ive.com>, Palmer Dabbelt <palmer@...belt.com>, Albert Ou
<aou@...s.berkeley.edu>, Alexandre Ghiti <alexghiti@...osinc.com>, Zong Li
<zong.li@...ive.com>, Steven Rostedt <rostedt@...dmis.org>, Masami
Hiramatsu <mhiramat@...nel.org>, Mark Rutland <mark.rutland@....com>,
Nathan Chancellor <nathan@...nel.org>, Nick Desaulniers
<ndesaulniers@...gle.com>, Bill Wendling <morbo@...gle.com>, Justin Stitt
<justinstitt@...gle.com>, Puranjay Mohan <puranjay@...nel.org>
Cc: Palmer Dabbelt <palmer@...osinc.com>, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
llvm@...ts.linux.dev, Andy Chiu <andy.chiu@...ive.com>
Subject: Re: [PATCH v2 3/6] riscv: ftrace: prepare ftrace for atomic code
patching

Björn Töpel <bjorn@...nel.org> writes:
> Andy Chiu <andy.chiu@...ive.com> writes:
>
>> We use an AUIPC+JALR pair to jump into a ftrace trampoline. Since
>> instruction fetch can break down to 4 byte at a time, it is impossible
>> to update two instructions without a race. In order to mitigate it, we
>> initialize the patchable entry to AUIPC + NOP4. Then, the run-time code
>> patching can change NOP4 to JALR to eable/disable ftrcae from a
> enable ftrace
>
>> function. This limits the reach of each ftrace entry to +-2KB displacing
>> from ftrace_caller.
>>
>> Starting from the trampoline, we add a level of indirection for it to
>> reach ftrace caller target. Now, it loads the target address from a
>> memory location, then perform the jump. This enable the kernel to update
>> the target atomically.
>
> The +-2K limit is for direct calls, right?
>
> ...and this I would say breaks DIRECT_CALLS (which should be implemented
> using call_ops later)?

Thinking a bit more, and re-reading the series.

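To restate the second half of the scheme in rough C (my paraphrase; the
names below are mine, not taken from the patch): the trampoline no longer
jumps to a hard-coded target, it loads the target from a memory slot, so
retargeting the trampoline is a single pointer-sized store.

/* Sketch only -- names are made up, not from the series. */
static volatile unsigned long ftrace_call_dest;	/* slot the trampoline loads from */

static void retarget_trampoline(unsigned long new_target)
{
	/* a single aligned pointer-sized store: the trampoline's load
	 * sees either the old or the new target, never a torn mix */
	ftrace_call_dest = new_target;
}
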
This series is good work, and it's a big improvement for DYNAMIC_FTRACE,
but
+int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+{
+	unsigned long distance, orig_addr;
+
+	orig_addr = (unsigned long)&ftrace_caller;
+	distance = addr > orig_addr ? addr - orig_addr : orig_addr - addr;
+	if (distance > JALR_RANGE)
+		return -EINVAL;
+
+	return __ftrace_modify_call(rec->ip, addr, false);
+}
+
breaks WITH_DIRECT_CALLS. The direct trampoline will *never* be within
the JALR_RANGE.
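
Just to put rough numbers on that (the addresses below are made up; only
the orders of magnitude matter): ftrace_caller lives in core kernel text,
while a direct trampoline, e.g. a BPF trampoline, is allocated from the
module/BPF area, typically hundreds of megabytes to a couple of gigabytes
away:

/* Back-of-the-envelope only; these addresses are illustrative. */
#include <stdio.h>

#define JALR_RANGE	2047UL			/* 12-bit signed JALR immediate */

int main(void)
{
	unsigned long ftrace_caller_va = 0xffffffff80a00000UL;	/* core kernel text */
	unsigned long direct_tramp_va  = 0xffffffff01200000UL;	/* module/BPF area */
	unsigned long distance = ftrace_caller_va - direct_tramp_va;

	/* prints distance=2139095040 (~2G) vs JALR_RANGE=2047 (~2K) */
	printf("distance=%lu JALR_RANGE=%lu\n", distance, JALR_RANGE);
	return 0;
}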

Unless we're happy with a break (I'm not) -- I really think Puranjay's
CALL_OPS patch needs to be baked into the series!
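
For anyone skimming the thread: the idea behind CALL_OPS (as on arm64) is
that every patch site carries its own ftrace_ops pointer right next to the
function entry, and the trampoline indirects through it, so attaching a
direct call to one function is a pointer-sized update with no range
restriction. A toy user-space mock of that shape, with all names invented
and nothing taken from Puranjay's actual patch:

/* Toy mock of the per-call-site indirection CALL_OPS provides; all
 * names are invented and nothing here is from the actual patch. */
#include <stdio.h>

struct mock_ops {
	void (*func)(unsigned long ip);		/* where this call site should go */
};

static void graph_tracer(unsigned long ip)      { printf("tracer for %#lx\n", ip); }
static void direct_trampoline(unsigned long ip) { printf("direct call for %#lx\n", ip); }

/* In the kernel this would sit in the patchable padding right before
 * the traced function, one per call site. */
static struct mock_ops foo_ops = { .func = graph_tracer };

static void traced_foo(void)
{
	/* stand-in for: patched entry -> trampoline -> load per-site ops ->
	 * indirect call; no +-2K limit anywhere */
	foo_ops.func((unsigned long)traced_foo);
}

int main(void)
{
	traced_foo();				/* goes through graph_tracer */
	foo_ops.func = direct_trampoline;	/* "attach" a direct call: one pointer update */
	traced_foo();				/* now goes straight to the direct trampoline */
	return 0;
}
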
Björn