lists.openwall.net
Open Source and information security mailing list archives
Date: Fri, 27 May 2011 14:14:19 -0700
From: Arun Sharma <asharma@...com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Arun Sharma <asharma@...com>, David Miller <davem@...emloft.net>,
	Maximilian Engelhardt <maxi@...monizer.de>, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org, StuStaNet Vorstand <vorstand@...sta.mhn.de>,
	Yann Dupont <Yann.Dupont@...v-nantes.fr>,
	Denys Fedoryshchenko <denys@...p.net.lb>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: Kernel crash after using new Intel NIC (igb)

On Fri, May 27, 2011 at 09:56:59PM +0200, Eric Dumazet wrote:
> >
> > This looks very similar to atomic_add_unless(). If we had a
> > __atomic_add_unless() that returned "old", we could then do:
> >
> > atomic_add_unless() { return __atomic_add_unless() != u }
> > atomic_add_unless_return() { return __atomic_add_unless() + a}
> >
>
> Sure !
>
> I preferred to not touch lot of files in kernel (atomic_add_unless() is
> defined in several files) because its a stable candidate patch (2.6.36+)
>
> So a cleanup patch for 2.6.40+ is certainly doable, do you want to do
> this ?

The attached works for me for x86_64. Cc'ing Ingo/Thomas for comment.

 -Arun

atomic: Refactor atomic_add_unless

Commit 686a7e3 (inetpeer: fix race in unused_list manipulations) in
net-2.6 added an atomic_add_unless_return() variant that tries to detect
0->1 transitions of an atomic reference count. This sounds like generic
functionality that could be expressed in terms of an
__atomic_add_unless() that returns the old value instead of a bool.
Signed-off-by: Arun Sharma <asharma@...com>
---
 arch/x86/include/asm/atomic.h |   22 ++++++++++++++++++----
 1 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 952a826..bbdbffe 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -221,15 +221,15 @@ static inline int atomic_xchg(atomic_t *v, int new)
 }
 
 /**
- * atomic_add_unless - add unless the number is already a given value
+ * __atomic_add_unless - add unless the number is already a given value
  * @v: pointer of type atomic_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
  *
  * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns non-zero if @v was not @u, and zero otherwise.
+ * Returns the old value of v
  */
-static inline int atomic_add_unless(atomic_t *v, int a, int u)
+static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 {
 	int c, old;
 	c = atomic_read(v);
@@ -241,7 +241,21 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 			break;
 		c = old;
 	}
-	return c != (u);
+	return c;
+}
+
+/**
+ * atomic_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as @v was not already @u.
+ * Returns non-zero if @v was not @u, and zero otherwise.
+ */
+static inline int atomic_add_unless(atomic_t *v, int a, int u)
+{
+	return __atomic_add_unless(v, a, u) != u;
 }
 
 #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
-- 
1.7.4

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
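[Editor's note] As a rough illustration of the refactoring discussed above, here is a user-space sketch of the same cmpxchg loop. It is not kernel code: the kernel's atomic_cmpxchg() is stood in for by GCC's __sync_val_compare_and_swap builtin, and the *_sketch names (including atomic_add_unless_return_sketch, mirroring the net-2.6 variant) are illustrative, not kernel API.

```c
/* User-space sketch only; assumes GCC/Clang __sync builtins. */

typedef struct { volatile int counter; } atomic_sketch_t;

/* Returns the old value of v->counter; adds a only if it was not u
 * (the role of __atomic_add_unless() in the patch). */
static inline int __atomic_add_unless_sketch(atomic_sketch_t *v, int a, int u)
{
	int c = v->counter, old;
	for (;;) {
		if (c == u)
			break;
		/* Compare-and-swap: succeeds iff counter still equals c. */
		old = __sync_val_compare_and_swap(&v->counter, c, c + a);
		if (old == c)
			break;
		c = old;	/* lost the race; retry with the fresh value */
	}
	return c;
}

/* The bool-style wrapper, as in the patch: non-zero iff the add happened. */
static inline int atomic_add_unless_sketch(atomic_sketch_t *v, int a, int u)
{
	return __atomic_add_unless_sketch(v, a, u) != u;
}

/* The value-returning variant from the inetpeer fix, expressed the same
 * way: old value plus a, so a 0->1 transition is visible to the caller. */
static inline int atomic_add_unless_return_sketch(atomic_sketch_t *v, int a, int u)
{
	return __atomic_add_unless_sketch(v, a, u) + a;
}
```

Both callers reduce to one CAS loop, which is the point of the cleanup: the bool-returning and value-returning flavors differ only in how they post-process the old value.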