Message-Id: <1473338719-18647-1-git-send-email-zhang.chunyan@linaro.org>
Date: Thu, 8 Sep 2016 20:45:19 +0800
From: Chunyan Zhang <zhang.chunyan@...aro.org>
To: rostedt@...dmis.org, mingo@...hat.com
Cc: zhang.lyra@...il.com, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, takahiro.akashi@...aro.org,
mark.yang@...eadtrum.com
Subject: [PATCH] arm64: use preempt_disable_notrace in _percpu_read/write
When debug preempt or the preempt tracer is enabled, preempt_count_add/sub()
can be traced by the function and function graph tracers. Since
preempt_disable/enable() calls preempt_count_add/sub(), anything on the
Ftrace path should use preempt_disable/enable_notrace() instead.
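
For reference, the two flavours differ roughly as follows (simplified from
include/linux/preempt.h; the exact definitions depend on the config):

  #if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_PREEMPT_TRACER)
  /* Out-of-line functions, so visible to the function/graph tracers. */
  extern void preempt_count_add(int val);
  extern void preempt_count_sub(int val);
  #else
  #define preempt_count_add(val)  __preempt_count_add(val)
  #define preempt_count_sub(val)  __preempt_count_sub(val)
  #endif

  #define preempt_disable() \
  do { \
          preempt_count_inc();    /* preempt_count_add(1), may be traced */ \
          barrier(); \
  } while (0)

  #define preempt_disable_notrace() \
  do { \
          __preempt_count_inc();  /* inline counter update, never traced */ \
          barrier(); \
  } while (0)
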
Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
events do") added a this_cpu_read() call to the trace_graph_entry() path.
If this_cpu_read() in turn calls preempt_disable(), the graph tracer goes
into a recursive loop, even when tracing_on is disabled.
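
Slightly simplified (the precise chain depends on the configuration), the
recursion looks like:

  trace_graph_entry()
    -> ftrace_trace_task()          /* this_cpu_read() since the above commit */
      -> _percpu_read()             /* arm64 backend of this_cpu_read() */
        -> preempt_disable()
          -> preempt_count_add()    /* itself traced ... */
            -> trace_graph_entry()  /* ... re-entering the graph tracer */
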
So change this_cpu_read() to use preempt_disable/enable_notrace() instead.

Yonghui Yang helped a lot in finding the root cause of this problem, so
his Signed-off-by is added as well.
Signed-off-by: Yonghui Yang <mark.yang@...eadtrum.com>
Signed-off-by: Chunyan Zhang <zhang.chunyan@...aro.org>
---
arch/arm64/include/asm/percpu.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 0a456be..2fee2f5 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
 #define _percpu_read(pcp)						\
 ({									\
 	typeof(pcp) __retval;						\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)),	\
 					      sizeof(pcp));		\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 	__retval;							\
 })
 
 #define _percpu_write(pcp, val)					\
 do {									\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val),	\
 				sizeof(pcp));				\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 } while(0)								\
 
 #define _pcp_protect(operation, pcp, val)			\
--
2.7.4