Message-ID: <1306561285.2533.9.camel@edumazet-laptop>
Date: Sat, 28 May 2011 07:41:25 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Arun Sharma <asharma@...com>
Cc: David Miller <davem@...emloft.net>,
Maximilian Engelhardt <maxi@...monizer.de>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
StuStaNet Vorstand <vorstand@...sta.mhn.de>,
Yann Dupont <Yann.Dupont@...v-nantes.fr>,
Denys Fedoryshchenko <denys@...p.net.lb>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: Kernel crash after using new Intel NIC (igb)

On Friday, 27 May 2011 at 14:14 -0700, Arun Sharma wrote:
> The attached works for me for x86_64. Cc'ing Ingo/Thomas for comment.
>
> -Arun
>
> atomic: Refactor atomic_add_unless
>
> Commit 686a7e3 (inetpeer: fix race in unused_list manipulations)
> in net-2.6 added an atomic_add_unless_return() variant that tries
> to detect 0->1 transitions of an atomic reference count.
>
> This sounds like generic functionality that could be expressed
> in terms of an __atomic_add_unless() that returned the old value
> instead of a bool.
>
> Signed-off-by: Arun Sharma <asharma@...com>
> ---
> arch/x86/include/asm/atomic.h | 22 ++++++++++++++++++----
> 1 files changed, 18 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
> index 952a826..bbdbffe 100644
> --- a/arch/x86/include/asm/atomic.h
> +++ b/arch/x86/include/asm/atomic.h
> @@ -221,15 +221,15 @@ static inline int atomic_xchg(atomic_t *v, int new)
> }
>
> /**
> - * atomic_add_unless - add unless the number is already a given value
> + * __atomic_add_unless - add unless the number is already a given value
> * @v: pointer of type atomic_t
> * @a: the amount to add to v...
> * @u: ...unless v is equal to u.
> *
> * Atomically adds @a to @v, so long as @v was not already @u.
> - * Returns non-zero if @v was not @u, and zero otherwise.
> + * Returns the old value of @v.
> */
> -static inline int atomic_add_unless(atomic_t *v, int a, int u)
> +static inline int __atomic_add_unless(atomic_t *v, int a, int u)
> {
> int c, old;
> c = atomic_read(v);
> @@ -241,7 +241,21 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
> break;
> c = old;
> }
> - return c != (u);
> + return c;
> +}
> +
> +/**
> + * atomic_add_unless - add unless the number is already a given value
> + * @v: pointer of type atomic_t
> + * @a: the amount to add to v...
> + * @u: ...unless v is equal to u.
> + *
> + * Atomically adds @a to @v, so long as @v was not already @u.
> + * Returns non-zero if @v was not @u, and zero otherwise.
> + */
> +static inline int atomic_add_unless(atomic_t *v, int a, int u)
> +{
> + return __atomic_add_unless(v, a, u) != u;
> }
>
> #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
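
Just to spell out the intended use: returning the old value lets the
caller see the exact pre-add count, e.g. to detect the 0->1 transition
the changelog mentions. A minimal sketch of such a caller (hypothetical
helper, not taken from the patch or from the inetpeer code):

/*
 * Hypothetical example, not part of the patch: bump a refcount unless
 * it is -1, and report whether this call performed the 0->1 transition.
 * Assumes <linux/types.h> and <asm/atomic.h> are included.
 */
static inline bool ref_was_unused(atomic_t *refcnt)
{
	/* __atomic_add_unless() returns the value seen before the add. */
	return __atomic_add_unless(refcnt, 1, -1) == 0;
}
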
As I said, atomic_add_unless() has several implementations across the
various arches. You must take care of all of them, not only the x86 one.
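
For example, the cmpxchg-based fallback pattern that several arches use
would need the same split. This is a sketch only, assuming the common
cmpxchg loop; the actual per-arch files differ:

/*
 * Illustrative only: the same old-value-returning refactor applied to
 * the common cmpxchg-based loop. Each arch copy (and the asm-generic
 * variant) would have to be converted the same way.
 */
static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
	int c, old;

	c = atomic_read(v);
	while (c != u && (old = atomic_cmpxchg(v, c, c + a)) != c)
		c = old;
	return c;		/* old value, not a bool */
}

static inline int atomic_add_unless(atomic_t *v, int a, int u)
{
	return __atomic_add_unless(v, a, u) != u;
}
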