Message-ID: <4A4DCD54.1080908@gmail.com>
Date: Fri, 03 Jul 2009 11:20:20 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Ingo Molnar <mingo@...e.hu>
CC: Jiri Olsa <jolsa@...hat.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
fbl@...hat.com, nhorman@...hat.com, davem@...hat.com,
htejun@...il.com, jarkao2@...il.com, oleg@...hat.com,
davidel@...ilserver.org
Subject: Re: [PATCHv5 2/2] memory barrier: adding smp_mb__after_lock
Ingo Molnar wrote:
> * Jiri Olsa <jolsa@...hat.com> wrote:
>
>> +++ b/arch/x86/include/asm/spinlock.h
>> @@ -302,4 +302,7 @@ static inline void __raw_write_unlock(raw_rwlock_t *rw)
>> #define _raw_read_relax(lock) cpu_relax()
>> #define _raw_write_relax(lock) cpu_relax()
>>
>> +/* The {read|write|spin}_lock() on x86 are full memory barriers. */
>> +#define smp_mb__after_lock() do { } while (0)
>
> Two small stylistic comments, please make this an inline function:
>
> static inline void smp_mb__after_lock(void) { }
> #define smp_mb__after_lock
>
> (untested)
>
>> +/* The lock does not imply full memory barrier. */
>> +#ifndef smp_mb__after_lock
>> +#define smp_mb__after_lock() smp_mb()
>> +#endif
>
> ditto.
>
> Ingo
This follows the existing implementation of the various smp_mb__*() helpers:
# grep -4 smp_mb__before_clear_bit include/asm-generic/bitops.h
/*
* clear_bit may not imply a memory barrier
*/
#ifndef smp_mb__before_clear_bit
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#endif
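
For reference, applying your inline-function suggestion in the same style would
look roughly like the sketch below (untested; the self-referential #define is my
assumption, it keeps the #ifndef fallback working while the empty inline function
still gives type checking):

	/* arch/x86/include/asm/spinlock.h */
	/* The {read|write|spin}_lock() on x86 are full memory barriers. */
	static inline void smp_mb__after_lock(void) { }
	#define smp_mb__after_lock smp_mb__after_lock

	/* include/linux/spinlock.h - generic fallback */
	#ifndef smp_mb__after_lock
	/* The lock does not imply a full memory barrier. */
	#define smp_mb__after_lock() smp_mb()
	#endif

The empty inline documents the x86 guarantee that the lock already acts as a full
barrier, while architectures without that guarantee fall back to an explicit smp_mb().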