Message-ID: <20250119193134.0ebd56bc@gandalf.local.home>
Date: Sun, 19 Jan 2025 19:31:34 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Masami Hiramatsu
 <mhiramat@...nel.org>, Mark Rutland <mark.rutland@....com>, Mathieu
 Desnoyers <mathieu.desnoyers@...icios.com>, Sven Schnelle
 <svens@...ux.ibm.com>
Subject: [GIT PULL] ftrace: Updates for v6.14


Linus,

ftrace updates for v6.14:

- Have fprobes built on top of function graph infrastructure

  The fprobe logic is an optimized kprobe that uses ftrace to attach to
  functions when a probe is needed at the start or end of the function. The
  fprobe and kretprobe logic implements a method similar to the function
  graph tracer for tracing the end of a function: hijack the return address
  and jump to a trampoline that does the trace when the function exits. To
  do this, a shadow stack is needed to store the original return address.
  Fprobes and function graph do this slightly differently. Fprobes (and
  kretprobes) have per-callsite slots reserved to save the return address.
  This is fine when only a few points are traced, but users of fprobes, such
  as BPF programs, are starting to add many more locations, and that method
  does not scale.

  The function graph tracer was created to trace all functions in the
  kernel. To do this, when function graph tracing is started, every task
  gets its own shadow stack to hold the return addresses that are going to
  be traced. The function graph tracer has been updated to allow multiple
  users of its infrastructure, and fprobes are now one of those users. This
  also allows the fprobe and kretprobe methods of tracing the return address
  to become obsolete. With new technologies like CFI that need to know about
  every way the return address can be hijacked, moving toward a single
  method makes the kernel less complex.
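
  For reference, a minimal sketch of the fprobe API that this series reworks
  (modeled loosely on samples/fprobe/fprobe_example.c; the handler
  signatures shown assume the post-series form that takes struct
  ftrace_regs, and the "vfs_*" glob and "my_*" names are only examples):

#include <linux/module.h>
#include <linux/fprobe.h>

static int my_entry(struct fprobe *fp, unsigned long entry_ip,
		    unsigned long ret_ip, struct ftrace_regs *fregs,
		    void *entry_data)
{
	pr_info("entered %pS\n", (void *)entry_ip);
	return 0;
}

/* exit handler: after this series it rides on the fgraph shadow stack */
static void my_exit(struct fprobe *fp, unsigned long entry_ip,
		    unsigned long ret_ip, struct ftrace_regs *fregs,
		    void *entry_data)
{
	pr_info("returning to %pS\n", (void *)ret_ip);
}

static struct fprobe my_fprobe = {
	.entry_handler	= my_entry,
	.exit_handler	= my_exit,
};

static int __init my_init(void)
{
	/* attach to every function matching the glob */
	return register_fprobe(&my_fprobe, "vfs_*", NULL);
}

static void __exit my_cleanup(void)
{
	unregister_fprobe(&my_fprobe);
}

module_init(my_init);
module_exit(my_cleanup);
MODULE_LICENSE("GPL");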

- Cleanup with guard() and __free() helpers

  There were several places in the code with a lot of "goto out" in the
  error paths to either unlock a lock or free memory that was allocated.
  This is error prone. Convert the code over to the guard() and __free()
  helpers, which have the compiler unlock the lock or free the memory
  automatically when it goes out of scope.
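
  As a rough illustration of the pattern (the helpers come from
  <linux/cleanup.h>; the lock, buffer, and function names below are made
  up):

#include <linux/cleanup.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static DEFINE_MUTEX(my_lock);

static int my_op(size_t len)
{
	/* buf is kfree()d automatically when it goes out of scope */
	char *buf __free(kfree) = kzalloc(len, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	/* my_lock is released on every return path below */
	guard(mutex)(&my_lock);

	if (len > 1024)
		return -EINVAL;	/* no "goto out" to unlock or free */

	/* ... do the real work ... */
	return 0;
}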

- Remove disabling of interrupts in the function graph tracer

  When the function graph tracer was first introduced, it could race with
  interrupts and NMIs. To prevent that race, it disabled interrupts and did
  not trace NMIs. The code was later changed to handle both interrupts and
  NMIs, but the disabling of interrupts was never removed. Remove the
  disabling of interrupts in the function graph tracer, as it is no longer
  needed. This greatly improves its performance.

- Allow the :mod: command to enable tracing module functions on the kernel
  command line.

  The function tracer already has a way to enable tracing of a module's
  functions by writing ":mod:<module>" into set_ftrace_filter. If the module
  is already loaded, all of its functions are enabled; if it is not, the
  command is cached, and when a module matching <module> is loaded, its
  functions are enabled. This also allows init functions to be traced. But
  currently the kernel command line does not have that feature.

  Because enabling function tracing can be done very early at boot up
  (before scheduling is enabled), the commands that can be used when
  function tracing is started are limited. Having the ":mod:" command trace
  module functions as they are loaded is very useful, so update the kernel
  command line function filtering to allow it.
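
  For reference, a usage sketch ("ext4" is just an example module; the
  runtime tracefs form already exists, and the command-line form is an
  illustration of what the last patch in the series enables):

	# at runtime, via tracefs (existing behavior):
	echo ':mod:ext4' > /sys/kernel/tracing/set_ftrace_filter

	# at boot, on the kernel command line (new with this series):
	ftrace=function ftrace_filter=:mod:ext4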


Please pull the latest ftrace-v6.14 tree, which can be found at:


  git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
ftrace-v6.14

Tag SHA1: 4df2b00c6c7d7a0a0b2cfe423637c0371d3a9128
Head SHA1: 31f505dc70331243fbb54af868c14bb5f44a15bc


Masami Hiramatsu (Google) (20):
      fgraph: Get ftrace recursion lock in function_graph_enter
      fgraph: Pass ftrace_regs to entryfunc
      fgraph: Replace fgraph_ret_regs with ftrace_regs
      fgraph: Pass ftrace_regs to retfunc
      fprobe: Use ftrace_regs in fprobe entry handler
      fprobe: Use ftrace_regs in fprobe exit handler
      tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs
      tracing: Add ftrace_fill_perf_regs() for perf event
      tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
      bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
      ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
      fprobe: Rewrite fprobe on function-graph tracer
      fprobe: Add fprobe_header encoding feature
      tracing/fprobe: Remove nr_maxactive from fprobe
      selftests: ftrace: Remove obsolate maxactive syntax check
      selftests/ftrace: Add a test case for repeating register/unregister fprobe
      Documentation: probes: Update fprobe on function-graph tracer
      ftrace: Add ftrace_get_symaddr to convert fentry_ip to symaddr
      bpf: Use ftrace_get_symaddr() for kprobe_multi probes
      tracing: Adopt __free() and guard() for trace_fprobe.c

Steven Rostedt (5):
      fgraph: Remove unnecessary disabling of interrupts and recursion
      ftrace: Do not disable interrupts in profiler
      ftrace: Remove unneeded goto jumps
      ftrace: Switch ftrace.c code over to use guard()
      ftrace: Implement :mod: cache filtering on kernel command line

Sven Schnelle (1):
      s390/tracing: Enable HAVE_FTRACE_GRAPH_FUNC

----
 Documentation/trace/fprobe.rst                     |  42 +-
 arch/arm64/Kconfig                                 |   2 +
 arch/arm64/include/asm/Kbuild                      |   1 +
 arch/arm64/include/asm/ftrace.h                    |  51 +-
 arch/arm64/kernel/asm-offsets.c                    |  12 -
 arch/arm64/kernel/entry-ftrace.S                   |  32 +-
 arch/arm64/kernel/ftrace.c                         |  78 ++-
 arch/loongarch/Kconfig                             |   4 +-
 arch/loongarch/include/asm/fprobe.h                |  12 +
 arch/loongarch/include/asm/ftrace.h                |  32 +-
 arch/loongarch/kernel/asm-offsets.c                |  12 -
 arch/loongarch/kernel/ftrace_dyn.c                 |  10 +-
 arch/loongarch/kernel/mcount.S                     |  17 +-
 arch/loongarch/kernel/mcount_dyn.S                 |  14 +-
 arch/powerpc/Kconfig                               |   1 +
 arch/powerpc/include/asm/ftrace.h                  |  13 +
 arch/powerpc/kernel/trace/ftrace.c                 |   8 +-
 arch/powerpc/kernel/trace/ftrace_64_pg.c           |  16 +-
 arch/riscv/Kconfig                                 |   3 +-
 arch/riscv/include/asm/Kbuild                      |   1 +
 arch/riscv/include/asm/ftrace.h                    |  45 +-
 arch/riscv/kernel/ftrace.c                         |  17 +-
 arch/riscv/kernel/mcount.S                         |  24 +-
 arch/s390/Kconfig                                  |   4 +-
 arch/s390/include/asm/fprobe.h                     |  10 +
 arch/s390/include/asm/ftrace.h                     |  37 +-
 arch/s390/kernel/asm-offsets.c                     |   6 -
 arch/s390/kernel/entry.h                           |   1 -
 arch/s390/kernel/ftrace.c                          |  48 +-
 arch/s390/kernel/mcount.S                          |  23 +-
 arch/x86/Kconfig                                   |   4 +-
 arch/x86/include/asm/Kbuild                        |   1 +
 arch/x86/include/asm/ftrace.h                      |  54 +-
 arch/x86/kernel/ftrace.c                           |  47 +-
 arch/x86/kernel/ftrace_32.S                        |  13 +-
 arch/x86/kernel/ftrace_64.S                        |  17 +-
 include/asm-generic/fprobe.h                       |  46 ++
 include/linux/fprobe.h                             |  62 +-
 include/linux/ftrace.h                             | 116 +++-
 include/linux/ftrace_regs.h                        |   2 +
 kernel/trace/Kconfig                               |  22 +-
 kernel/trace/bpf_trace.c                           |  28 +-
 kernel/trace/fgraph.c                              |  65 +-
 kernel/trace/fprobe.c                              | 664 +++++++++++++++------
 kernel/trace/ftrace.c                              | 203 ++++---
 kernel/trace/trace.c                               |   8 +
 kernel/trace/trace.h                               |   8 +-
 kernel/trace/trace_fprobe.c                        | 270 +++++----
 kernel/trace/trace_functions_graph.c               |  47 +-
 kernel/trace/trace_irqsoff.c                       |   6 +-
 kernel/trace/trace_probe_tmpl.h                    |   2 +-
 kernel/trace/trace_sched_wakeup.c                  |   6 +-
 kernel/trace/trace_selftest.c                      |  11 +-
 lib/test_fprobe.c                                  |  51 +-
 samples/fprobe/fprobe_example.c                    |   4 +-
 .../test.d/dynevent/add_remove_fprobe_repeat.tc    |  19 +
 .../ftrace/test.d/dynevent/fprobe_syntax_errors.tc |   4 +-
 57 files changed, 1504 insertions(+), 852 deletions(-)
 create mode 100644 arch/loongarch/include/asm/fprobe.h
 create mode 100644 arch/s390/include/asm/fprobe.h
 create mode 100644 include/asm-generic/fprobe.h
 create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc
---------------------------
