Message-ID: <49EE2293.4090201@cosmosbay.com>
Date: Tue, 21 Apr 2009 21:46:27 +0200
From: Eric Dumazet <dada1@...mosbay.com>
To: Ingo Molnar <mingo@...e.hu>
CC: Stephen Hemminger <shemminger@...tta.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Paul Mackerras <paulus@...ba.org>, paulmck@...ux.vnet.ibm.com,
Evgeniy Polyakov <zbr@...emap.net>,
David Miller <davem@...emloft.net>, kaber@...sh.net,
jeff.chua.linux@...il.com, laijs@...fujitsu.com,
jengelh@...ozas.de, r000n@...0n.net, linux-kernel@...r.kernel.org,
netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
benh@...nel.crashing.org, mathieu.desnoyers@...ymtl.ca
Subject: Re: [PATCH] netfilter: use per-cpu recursive lock (v11)

Ingo Molnar wrote:
>
> Why not use the obvious solution: a _single_ wrlock for global
> access and read_can_lock() plus per cpu locks in the fastpath?
Obvious is not the qualifier I would use :)
Brilliant, yes :)
>
> That way there's no global cacheline bouncing (just the _reading_ of
> a global cacheline - which will be nicely localized - on NUMA too) -
> and we will hold at most 1-2 locks at once!
>
> Something like:
>
> __cacheline_aligned DEFINE_RWLOCK(global_wrlock);
>
> DEFINE_PER_CPU(rwlock_t, local_lock);
>
>
> void local_read_lock(void)
> {
> again:
>         read_lock(&per_cpu(local_lock, this_cpu));
Hmm... here we can see global_wrlock locked by one writer while this cpu
has already called local_read_lock() and now calls it again (recursion)
-> deadlock, because we still hold our local_lock read-locked (the
interleaving is spelled out below, after the quoted global_write_lock()).
>
>         if (unlikely(!read_can_lock(&global_wrlock))) {
>                 read_unlock(&per_cpu(local_lock, this_cpu));
>                 /*
>                  * Just wait for any global write activity:
>                  */
>                 read_unlock_wait(&global_wrlock);
>                 goto again;
>         }
> }
>
> void global_write_lock(void)
> {
>         write_lock(&global_wrlock);
>
>         for_each_possible_cpu(i)
>                 write_unlock_wait(&per_cpu(local_lock, i));
> }
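To spell out the deadlock I mention above (illustration only, assuming the
writer on another cpu entered global_write_lock() first):

/*
 *   CPU 0 (reader, recursing)             CPU 1 (writer)
 *
 *   local_read_lock()
 *     read_lock(local_lock[0])            global_write_lock()
 *                                           write_lock(global_wrlock)
 *                                           write_unlock_wait(local_lock[0])
 *                                             -> spins, CPU 0 holds a read
 *   local_read_lock()          <- recursion
 *     read_lock(local_lock[0])            ok, recursive read
 *     read_can_lock(global_wrlock) == 0
 *     read_unlock(local_lock[0])          the outer read is still held
 *     read_unlock_wait(global_wrlock)
 *       -> spins, CPU 1 never releases it
 *
 * Neither cpu can make progress.
 */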
>
> Note how nesting friendly this construct is: we don't actually _hold_
> NR_CPUS locks all at once, we simply cycle through all CPUs and make
> sure they have our attention.
>
> No preempt overflow. No lockdep explosion. A very fast and scalable
> read path.
>
> Okay - we need to implement read_unlock_wait() and
> write_unlock_wait() which is similar to spin_unlock_wait(). The
> trivial first-approximation is:
>
> read_unlock_wait(x)
> {
>         read_lock(x);
>         read_unlock(x);
> }
>
> write_unlock_wait(x)
> {
>         write_lock(x);
>         write_unlock(x);
> }
>
Very interesting, and it could be changed to use a spinlock + depth counter per
cpu -> we can detect recursion and avoid the deadlock, and we use only one
atomic operation per lock/unlock pair in the fastpath (this was the reason we
tried so hard to use a percpu spinlock during this thread).

Something like:
__cacheline_aligned DEFINE_RWLOCK(global_wrlock);

struct ingo_local_lock {
        spinlock_t lock;
        int depth;
};
DEFINE_PER_CPU(struct ingo_local_lock, local_lock);

void local_read_lock(void)
{
        struct ingo_local_lock *lck;

        local_bh_and_preempt_disable();
        lck = &get_cpu_var(local_lock);
        if (lck->depth++ > 0)   /* recursive call : lock already held */
                return;
again:
        spin_lock(&lck->lock);

        if (unlikely(!read_can_lock(&global_wrlock))) {
                spin_unlock(&lck->lock);
                /*
                 * Just wait for any global write activity:
                 */
                read_unlock_wait(&global_wrlock);
                goto again;
        }
}

void global_write_lock(void)
{
        int i;

        write_lock(&global_wrlock);

        for_each_possible_cpu(i)
                spin_unlock_wait(&per_cpu(local_lock, i).lock);
}
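
The unlock side is not written above; to make the sketch complete it could
look like this (illustration only: put_cpu_var() pairs with the get_cpu_var()
in local_read_lock(), and local_bh_and_preempt_enable() is the hypothetical
mirror of the local_bh_and_preempt_disable() helper used above):

void local_read_unlock(void)
{
        struct ingo_local_lock *lck = &__get_cpu_var(local_lock);

        if (--lck->depth == 0)          /* outermost unlock releases the spinlock */
                spin_unlock(&lck->lock);
        put_cpu_var(local_lock);        /* pairs with get_cpu_var() in local_read_lock() */
        local_bh_and_preempt_enable();  /* hypothetical mirror of the disable helper */
}

void global_write_unlock(void)
{
        write_unlock(&global_wrlock);
}

The packet path would then just bracket the table walk with
local_read_lock()/local_read_unlock(), and the table replacement path with
global_write_lock()/global_write_unlock().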
Hmm ?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/