3.18.36-rt38-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra

Vikram reported that his ARM64 compiler managed to 'optimize' away the
preempt_count manipulations in code like:

	preempt_enable_no_resched();
	put_user();
	preempt_disable();

Irrespective of the fact that this is horrible code that should be fixed
for many reasons, it does highlight a deficiency in the generic
preempt_count manipulators, as it is never right to combine/elide
preempt_count manipulations like this.

Therefore sprinkle some volatile in the two generic accessors to ensure
the compiler is aware of the fact that the preempt_count is observed
outside of the regular program-order view and thus cannot be optimized
away like this.

x86, the only arch not using the generic code, is not affected, as we do
all this in asm in order to use the segment base per-cpu stuff.

Cc: stable@vger.kernel.org
Cc: stable-rt@vger.kernel.org
Cc: Thomas Gleixner
Fixes: a787870924db ("sched, arch: Create asm/preempt.h")
Reported-by: Vikram Mulukutla
Tested-by: Vikram Mulukutla
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt
---
 include/asm-generic/preempt.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index 1cd3f5d767a8..ed1881dd9b36 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -7,10 +7,10 @@
 
 static __always_inline int preempt_count(void)
 {
-	return current_thread_info()->preempt_count;
+	return READ_ONCE(current_thread_info()->preempt_count);
 }
 
-static __always_inline int *preempt_count_ptr(void)
+static __always_inline volatile int *preempt_count_ptr(void)
 {
 	return &current_thread_info()->preempt_count;
 }
-- 
2.8.1
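
[Illustrative aside, not part of the patch above: a minimal user-space
sketch of the elision being fixed. The names fake_preempt_count,
count_ptr_plain(), count_ptr_volatile(), fake_enable() and fake_disable()
are hypothetical stand-ins for the kernel accessors, chosen only to show
why the volatile qualifier matters.]

	#include <stdio.h>

	static int fake_preempt_count = 1;

	/*
	 * Non-volatile accessor: under ordinary single-threaded C semantics
	 * the compiler may see that a decrement through this pointer is
	 * immediately undone by a following increment and elide both, which
	 * is the kind of 'optimization' reported on ARM64.
	 */
	static inline int *count_ptr_plain(void)
	{
		return &fake_preempt_count;
	}

	/*
	 * Volatile accessor, mirroring the patched preempt_count_ptr():
	 * every access must actually be emitted, so paired manipulations
	 * cannot be combined away.
	 */
	static inline volatile int *count_ptr_volatile(void)
	{
		return &fake_preempt_count;
	}

	static inline void fake_enable(void)  { --*count_ptr_volatile(); }
	static inline void fake_disable(void) { ++*count_ptr_volatile(); }

	int main(void)
	{
		fake_enable();          /* count drops 1 -> 0 */
		/* ... work that must observe the lowered count ... */
		fake_disable();         /* count restored 0 -> 1 */
		printf("%d\n", fake_preempt_count);
		return 0;
	}

Built with optimizations, the volatile variant keeps both read-modify-write
sequences in the generated code, whereas the plain variant would be free to
drop them entirely.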