Message-ID: <87msqsotr8.fsf@all.your.base.are.belong.to.us>
Date: Thu, 21 Mar 2024 09:48:27 +0100
From: Björn Töpel <bjorn@...nel.org>
To: Andy Chiu <andy.chiu@...ive.com>
Cc: Puranjay Mohan <puranjay12@...il.com>, Mark Rutland
 <mark.rutland@....com>, Paul Walmsley <paul.walmsley@...ive.com>, Palmer
 Dabbelt <palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>, Steven
 Rostedt <rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>,
 Sami Tolvanen <samitolvanen@...gle.com>, Guo Ren <guoren@...nel.org>, Ley
 Foon Tan <leyfoon.tan@...rfivetech.com>, Deepak Gupta
 <debug@...osinc.com>, Sia Jee Heng <jeeheng.sia@...rfivetech.com>, Bjorn
 Topel <bjorn@...osinc.com>, Song Shuai <suagrfillet@...il.com>, Clément
 Léger <cleger@...osinc.com>, Al Viro <viro@...iv.linux.org.uk>, Jisheng
 Zhang <jszhang@...nel.org>, linux-riscv@...ts.infradead.org,
 linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org, Robbin
 Ehn <rehn@...osinc.com>, Brendan Sweeney <brs@...osinc.com>
Subject: Re: [RFC PATCH] riscv: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS

Andy,

Pulling out the A option:

>> > A) Use auipc/jalr, only patch jalr to take us to a common
>> >    dispatcher/trampoline
>> >
>> >  | <func_trace_target_data_8B> # probably on a data cache-line != func .text to avoid ping-pong
>> >  | ...
>> >  | func:
>> >  |   ...make sure ra isn't messed up...
>> >  |   auipc
>> >  |   nop <=> jalr # Text patch point -> common_dispatch
>> >  |   ACTUAL_FUNC
>> >  |
>> >  | common_dispatch:
>> >  |   load <func_trace_target_data_8B> based on ra
>> >  |   jalr
>> >  |   ...
>> >
>> > The auipc is never touched, and will be overhead. Also, we need a mv to
>> > store ra in a scratch register as well -- like Arm. We'll have two insn
>> > per-caller overhead for a disabled caller.
>
> My patch series takes a similar "in-function dispatch" approach. A
> difference is that the <func_trace_target_data_8B_per_function> is
> embedded within each function entry. I'd like to have it moved to a
> run-time allocated array to reduce total text size.

This is what arm64 has as well. It's 8B of data plus 1-2 dirt-cheap
mov-ish instructions (save ra, prepare the jump with auipc). I think
that's a reasonable overhead.
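
To make the lookup step concrete, here is a rough host-side C model of
the A) scheme above. This is not kernel code: patch_site and my_tracer
are invented names, and the real dispatcher would derive the slot
address from ra instead of being handed a pointer; the point is only to
show the data flow through the per-function 8B slot.

/*
 * Host-side model, not kernel code: patch_site and my_tracer are
 * invented names. In A), the jalr lands in common_dispatch with ra
 * pointing just past the patch point, and "ra - fixed offset" locates
 * the per-function 8B slot holding the current trace target.
 */
#include <stdint.h>
#include <stdio.h>

typedef void (*trace_target_t)(unsigned long call_site);

/* One 8B slot per traced function, conceptually placed in front of the
 * function's patchable entry (<func_trace_target_data_8B>). */
struct patch_site {
        trace_target_t target;
        unsigned long func;             /* entry address, for the demo */
};

static void my_tracer(unsigned long call_site)
{
        printf("traced call from %#lx\n", call_site);
}

/* In the real scheme this would compute the slot address from ra; here
 * the slot is passed explicitly so the model stays plain, portable C. */
static void common_dispatch(struct patch_site *slot, unsigned long ra)
{
        slot->target(ra);
}

int main(void)
{
        struct patch_site foo_slot = { .target = my_tracer, .func = 0x1000 };

        /* A call arriving from the (pretend) patched auipc/jalr pair: */
        common_dispatch(&foo_slot, foo_slot.func + 8);
        return 0;
}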

> Another difference is that my series changes the first instruction to
> "j ACTUAL_FUNC" for the "ftrace disable" case. As long as the
> architecture guarantees the atomicity of the first instruction, we are
> safe. For example, we are safe if the first instruction can only ever
> be "mv tmp, ra" or "j ACTUAL_FUNC". And since the loaded address is
> always valid, we can keep "mv + jalr" fixed in place, so we don't have
> to play the exception-handler trick. That guarantee from the
> architecture would require Ziccif (in RVA22) though, but I think it is
> the same for us (unless we use stop_machine). For Ziccif, I would
> rather detect it during boot than blindly assume it.

I'm maybe biased, but I'd prefer A) over your version with the
unconditional jump. A) has the overhead of two, I'd say, essentially
free instructions (again "Meten is Weten!" -- Dutch for "to measure is
to know" ;-)).

> However, one thing I am not very sure about: do we need a destination
> address on a "per-function" basis? It seems like most of the time the
> destination address can only be ftrace_call or ftrace_regs_call. If
> the number of distinct destination addresses is very small, then we
> could potentially reduce the size of
> <func_trace_target_data_8B_per_function>.

Yes, we do need it on a per-function basis. BPF, e.g., uses
dynamically JIT:ed trampolines/targets.
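
As a rough host-side illustration of that point (plain C with invented
names, not the actual ftrace or BPF code): attaching a JIT:ed
trampoline to one function rewrites only that function's slot, while
every other slot keeps pointing at the common handler, which is why a
single shared destination address is not enough.

#include <stdio.h>

typedef void (*trace_target_t)(const char *func);

/* Common handler that most slots point at (think ftrace_call). */
static void generic_handler(const char *func)
{
        printf("generic handler: %s\n", func);
}

/* Stand-in for a dynamically JIT:ed, function-specific BPF trampoline. */
static void bpf_trampoline_for_foo(const char *func)
{
        printf("bpf trampoline (foo only): %s\n", func);
}

int main(void)
{
        /* Per-function 8B slots, both starting out on the common handler. */
        trace_target_t foo_slot = generic_handler;
        trace_target_t bar_slot = generic_handler;

        /* Attaching a BPF program to foo() rewrites only foo's slot. */
        foo_slot = bpf_trampoline_for_foo;

        foo_slot("foo");        /* redirected to the JIT:ed trampoline */
        bar_slot("bar");        /* still on the common handler */
        return 0;
}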



Björn
