Message-ID: <20090703111848.GA10267@jolsa.lab.eng.brq.redhat.com>
Date: Fri, 3 Jul 2009 13:18:48 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
fbl@...hat.com, nhorman@...hat.com, davem@...hat.com,
htejun@...il.com, jarkao2@...il.com, oleg@...hat.com,
davidel@...ilserver.org
Subject: Re: [PATCHv5 2/2] memory barrier: adding smp_mb__after_lock
On Fri, Jul 03, 2009 at 12:25:30PM +0200, Ingo Molnar wrote:
>
> * Jiri Olsa <jolsa@...hat.com> wrote:
>
> > On Fri, Jul 03, 2009 at 11:24:38AM +0200, Ingo Molnar wrote:
> > >
> > > * Eric Dumazet <eric.dumazet@...il.com> wrote:
> > >
> > > > Ingo Molnar wrote:
> > > > > * Jiri Olsa <jolsa@...hat.com> wrote:
> > > > >
> > > > >> +++ b/arch/x86/include/asm/spinlock.h
> > > > >> @@ -302,4 +302,7 @@ static inline void __raw_write_unlock(raw_rwlock_t *rw)
> > > > >> #define _raw_read_relax(lock) cpu_relax()
> > > > >> #define _raw_write_relax(lock) cpu_relax()
> > > > >>
> > > > >> +/* The {read|write|spin}_lock() on x86 are full memory barriers. */
> > > > >> +#define smp_mb__after_lock() do { } while (0)
> > > > >
> > > > > Two small stylistic comments, please make this an inline function:
> > > > >
> > > > > static inline void smp_mb__after_lock(void) { }
> > > > > #define smp_mb__after_lock
> > > > >
> > > > > (untested)
> > > > >
> > > > >> +/* The lock does not imply full memory barrier. */
> > > > >> +#ifndef smp_mb__after_lock
> > > > >> +#define smp_mb__after_lock() smp_mb()
> > > > >> +#endif
> > > > >
> > > > > ditto.
> > > > >
> > > > > Ingo
> > > >
> > > > This was following the existing implementations of the various smp_mb__??? helpers:
> > > >
> > > > # grep -4 smp_mb__before_clear_bit include/asm-generic/bitops.h
> > > >
> > > > /*
> > > > * clear_bit may not imply a memory barrier
> > > > */
> > > > #ifndef smp_mb__before_clear_bit
> > > > #define smp_mb__before_clear_bit() smp_mb()
> > > > #define smp_mb__after_clear_bit() smp_mb()
> > > > #endif
> > >
> > > Did I mention that those should be fixed too? :-)
> > >
> > > Ingo
> >
> > OK, could I include it in 2/2, or do you prefer a separate patch?
>
> Depends on whether it will regress ;-)
>
> If it regresses, it's better to have it separate. If it won't, it can
> be included. If unsure, default to the more conservative option.
>
> Ingo
How about this? I'd do the similar change for smp_mb__before_clear_bit in a
separate patch (a rough sketch of it follows the patch below).
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index b7e5db8..4e77853 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -302,4 +302,8 @@ static inline void __raw_write_unlock(raw_rwlock_t *rw)
#define _raw_read_relax(lock) cpu_relax()
#define _raw_write_relax(lock) cpu_relax()
+/* The {read|write|spin}_lock() on x86 are full memory barriers. */
+static inline void smp_mb__after_lock(void) { }
+#define ARCH_HAS_SMP_MB_AFTER_LOCK
+
#endif /* _ASM_X86_SPINLOCK_H */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 252b245..4be57ab 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -132,6 +132,11 @@ do { \
#endif /*__raw_spin_is_contended*/
#endif
+/* The lock does not imply a full memory barrier. */
+#ifndef ARCH_HAS_SMP_MB_AFTER_LOCK
+static inline void smp_mb__after_lock(void) { smp_mb(); }
+#endif
+
/**
* spin_unlock_wait - wait until the spinlock gets unlocked
* @lock: the spinlock in question.
diff --git a/include/net/sock.h b/include/net/sock.h
index 4eb8409..98afcd9 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1271,6 +1271,9 @@ static inline int sk_has_allocations(const struct sock *sk)
* in its cache, and so does the tp->rcv_nxt update on CPU2 side. The CPU1
* could then endup calling schedule and sleep forever if there are no more
* data on the socket.
+ *
+ * sk_has_sleeper is always called right after a call to read_lock, so we
+ * can use the smp_mb__after_lock barrier.
*/
static inline int sk_has_sleeper(struct sock *sk)
{
@@ -1280,7 +1283,7 @@ static inline int sk_has_sleeper(struct sock *sk)
*
* This memory barrier is paired in the sock_poll_wait.
*/
- smp_mb();
+ smp_mb__after_lock();
return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
}
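
For reference, this is the call-site pattern the new comment in sock.h is
describing; a minimal sketch modeled on the sock_def_readable()-style wakeup
callbacks in net/core/sock.c (the function name and the wakeup call here are
illustrative only, not part of this patch):

	static void sock_readable_sketch(struct sock *sk, int len)
	{
		read_lock(&sk->sk_callback_lock);
		/*
		 * sk_has_sleeper() does smp_mb__after_lock() internally; on
		 * x86 that is a no-op, since read_lock() is already a full
		 * memory barrier, while other architectures get smp_mb().
		 */
		if (sk_has_sleeper(sk))
			wake_up_interruptible(sk->sk_sleep);
		read_unlock(&sk->sk_callback_lock);
	}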
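
As for the smp_mb__before_clear_bit()/smp_mb__after_clear_bit() conversion
mentioned above, the separate patch would presumably follow the same scheme.
A rough sketch only (the ARCH_HAS_SMP_MB_CLEAR_BIT name is hypothetical, by
analogy with ARCH_HAS_SMP_MB_AFTER_LOCK):

	/*
	 * arch/x86/include/asm/bitops.h: clear_bit() on x86 uses a
	 * lock-prefixed instruction, so a compiler barrier is enough.
	 */
	static inline void smp_mb__before_clear_bit(void) { barrier(); }
	static inline void smp_mb__after_clear_bit(void) { barrier(); }
	#define ARCH_HAS_SMP_MB_CLEAR_BIT

	/*
	 * include/asm-generic/bitops.h: clear_bit may not imply a memory
	 * barrier, so fall back to a full smp_mb().
	 */
	#ifndef ARCH_HAS_SMP_MB_CLEAR_BIT
	static inline void smp_mb__before_clear_bit(void) { smp_mb(); }
	static inline void smp_mb__after_clear_bit(void) { smp_mb(); }
	#endif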