From ef5e7cca16541bd0d9fb36d13c7e40c6ecc4fd77 Mon Sep 17 00:00:00 2001
From: Led
Date: Wed, 31 Mar 2010 21:09:20 +0300
Subject: [PATCH] x86: avoid 'constant_test_bit()' misoptimization due to cast
 to non-volatile

While debugging a bit_spin_lock() hang, the problem was tracked down to
a gcc-4.4 misoptimization of constant_test_bit(): the 'const volatile
unsigned long *addr' argument is cast to 'unsigned long *', after which
the compiler emits an unconditional jump to the pause loop (rather than
to the test), leading to the hang.

Compiling with gcc-4.3, or disabling CONFIG_OPTIMIZE_INLINING, yields
an inlined constant_test_bit() and a correct jump.

Architectures other than asm-x86 may implement this slightly
differently; 2.6.29 mitigates the misoptimization by changing the
function prototype (commit c4295fbb6048d85f0b41c5ced5cbf63f6811c46c),
but fixing the underlying issue is better.

---
 include/asm-x86/bitops.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/include/asm-x86/bitops.h b/include/asm-x86/bitops.h
index cfb2b64..89352ed 100644
--- a/include/asm-x86/bitops.h
+++ b/include/asm-x86/bitops.h
@@ -295,7 +295,7 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
 static inline int constant_test_bit(int nr, const volatile unsigned long *addr)
 {
 	return ((1UL << (nr % BITS_PER_LONG)) &
-		(((unsigned long *)addr)[nr / BITS_PER_LONG])) != 0;
+		(addr[nr / BITS_PER_LONG])) != 0;
 }
 
 static inline int variable_test_bit(int nr, volatile const unsigned long *addr)
-- 
1.7.0.3
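
Note (editor's addition, not part of the patch): the failure mode is easiest
to see outside the kernel. Below is a minimal, compilable C sketch of the two
variants of the bit test; the helper names (test_bit_nonvolatile,
test_bit_volatile, wait_for_bit_clear) are hypothetical illustrations, not
kernel API. The point is that the cast to plain 'unsigned long *' discards
the volatile qualifier, so an optimizing compiler may load the word once,
hoist the load out of a bit_spin_lock()-style wait loop, and spin forever
even after another CPU clears the bit.

#include <limits.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

/* Buggy variant: the cast silently drops 'volatile', licensing the
 * compiler to assume the pointed-to word never changes. */
static inline int test_bit_nonvolatile(int nr, const volatile unsigned long *addr)
{
	return ((1UL << (nr % BITS_PER_LONG)) &
		(((unsigned long *)addr)[nr / BITS_PER_LONG])) != 0;
}

/* Fixed variant (what the patch does): the read goes through the
 * volatile-qualified pointer, forcing a fresh load on every call. */
static inline int test_bit_volatile(int nr, const volatile unsigned long *addr)
{
	return ((1UL << (nr % BITS_PER_LONG)) &
		(addr[nr / BITS_PER_LONG])) != 0;
}

/* A bit_spin_lock()-style spin: with the non-volatile variant, the
 * load can be hoisted out of the loop, which is exactly the
 * "unconditional jump to pause" hang described in the changelog. */
static void wait_for_bit_clear(const volatile unsigned long *word)
{
	while (test_bit_nonvolatile(0, word))
		; /* may never re-read *word under an optimizing compiler */
}

int main(void)
{
	volatile unsigned long word = 0UL; /* bit already clear */
	wait_for_bit_clear(&word);         /* single-threaded demo only */
	return 0;
}

In the real failing scenario another CPU clears the bit while this one
spins; the single-threaded sketch only shows the shape of the code that
gcc-4.4 mis-compiled when constant_test_bit() was not inlined.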