Message-ID: <20090421092223.GA19049@ioremap.net>
Date: Tue, 21 Apr 2009 13:22:23 +0400
From: Evgeniy Polyakov <zbr@...emap.net>
To: Eric Dumazet <dada1@...mosbay.com>
Cc: Lai Jiangshan <laijs@...fujitsu.com>,
Stephen Hemminger <shemminger@...tta.com>,
Paul Mackerras <paulus@...ba.org>, paulmck@...ux.vnet.ibm.com,
David Miller <davem@...emloft.net>, kaber@...sh.net,
torvalds@...ux-foundation.org, jeff.chua.linux@...il.com,
mingo@...e.hu, jengelh@...ozas.de, r000n@...0n.net,
linux-kernel@...r.kernel.org, netfilter-devel@...r.kernel.org,
netdev@...r.kernel.org, benh@...nel.crashing.org,
mathieu.desnoyers@...ymtl.ca
Subject: Re: [PATCH] netfilter: use per-cpu recursive lock (v11)
On Tue, Apr 21, 2009 at 10:55:59AM +0200, Eric Dumazet (dada1@...mosbay.com) wrote:
> Maybe just dont care about calling several time local_bh_disable()
> (since we were doing this in previous kernels anyway, we used to call read_lock_bh())
>
> This shortens fastpath, is faster than local_irq_save()/local_irq_restore(),
> and looks better.
Yeah, given that non-nested locking is the more likely condition, it will be
even faster than the preemption-based case.
> void xt_info_rdlock_bh(void)
> {
>         struct xt_info_lock *lock;
>
>         local_bh_disable();
>         lock = &__get_cpu_var(xt_info_locks);
>         if (likely(++lock->depth == 0))
>                 spin_lock(&lock->lock);
> }
>
> void xt_info_rdunlock_bh(void)
> {
>         struct xt_info_lock *lock = &__get_cpu_var(xt_info_locks);
>
>         BUG_ON(lock->depth < 0);
>         if (likely(--lock->depth < 0))
>                 spin_unlock(&lock->lock);
>         local_bh_enable();
> }
>
>
--
Evgeniy Polyakov