Date:	Mon, 18 Jun 2012 10:01:38 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	LKML <linux-kernel@...r.kernel.org>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Fengguang Wu <wfg@...ux.intel.com>
Subject: [PATCH v2][GIT PULL][v3.5] ftrace: Make all inline tags also
 include notrace


Ingo,

I updated the change log as you recommended, there was no code change.

-- Steve

Please pull the latest tip/perf/urgent-2 tree, which can be found at:

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
tip/perf/urgent-2

Head SHA1: 93b3cca1ccd30b1ad290951a3fc7c10c73db7313


Steven Rostedt (1):
      ftrace: Make all inline tags also include notrace

----
 include/linux/compiler-gcc.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
---------------------------
commit 93b3cca1ccd30b1ad290951a3fc7c10c73db7313
Author: Steven Rostedt <rostedt@...dmis.org>
Date:   Thu Jun 14 10:54:28 2012 -0400

    ftrace: Make all inline tags also include notrace
    
    Commit 5963e317b1e9d2a ("ftrace/x86: Do not change stacks in DEBUG when
    calling lockdep") prevented lockdep calls from the int3 breakpoint handler
    from resetting the stack if a function that was called was in the process
    of being converted for tracing and had a breakpoint on it. The idea is,
    before calling the lockdep code, to do a load_idt() to the special IDT that
    keeps the breakpoint stack from resetting. This worked well as a quick fix
    for this kernel release, until a certain config caused a lockup in the
    function tracer start up tests.
    
    Investigating it, I found that the load_idt() that was used to prevent
    the int3 handler from changing stacks was itself being traced!
    
    Even though the config had CONFIG_OPTIMIZE_INLINING disabled, and
    all 'inline' tags were set to always inline, there were still cases
    where functions were not inlined! This was caused by CONFIG_PARAVIRT_GUEST,
    which takes a pointer to native_load_idt(), forcing the compiler to emit
    an out-of-line copy of that function, which could then be traced.
    
    Commit 45959ee7aa645815a ("ftrace: Do not function trace inlined functions")
    only touched the 'inline' tags when CONFIG_OPTIMIZE_INLINING was enabled.
    PARAVIRT_GUEST shows that this was not enough, and always_inline must be
    marked notrace as well.
    
    Reported-by: Fengguang Wu <wfg@...ux.intel.com>
    Tested-by: Fengguang Wu <wfg@...ux.intel.com>
    Signed-off-by: Steven Rostedt <rostedt@...dmis.org>

diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index e5834aa..6a6d7ae 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -47,9 +47,9 @@
  */
 #if !defined(CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING) || \
     !defined(CONFIG_OPTIMIZE_INLINING) || (__GNUC__ < 4)
-# define inline		inline		__attribute__((always_inline))
-# define __inline__	__inline__	__attribute__((always_inline))
-# define __inline	__inline	__attribute__((always_inline))
+# define inline		inline		__attribute__((always_inline)) notrace
+# define __inline__	__inline__	__attribute__((always_inline)) notrace
+# define __inline	__inline	__attribute__((always_inline)) notrace
 #else
 /* A lot of inline functions can cause havoc with function tracing */
 # define inline		inline		notrace

