Message-ID: <20090417001304.GA21657@linux.vnet.ibm.com>
Date: Thu, 16 Apr 2009 17:13:04 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Eric Dumazet <dada1@...mosbay.com>
Cc: Patrick McHardy <kaber@...sh.net>,
Stephen Hemminger <shemminger@...tta.com>,
David Miller <davem@...emloft.net>, jeff.chua.linux@...il.com,
paulus@...ba.org, mingo@...e.hu, laijs@...fujitsu.com,
jengelh@...ozas.de, r000n@...0n.net, linux-kernel@...r.kernel.org,
netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
benh@...nel.crashing.org
Subject: Re: [PATCH] netfilter: use per-cpu recursive spinlock (v6)
On Thu, Apr 16, 2009 at 08:41:58PM +0200, Eric Dumazet wrote:
>
>
> Paul E. McKenney wrote:
> >
> > But if some other CPU holds the lock, this code would fail to wait for
> > that other CPU to release the lock, right? It also might corrupt the
> > rl->count field due to two CPUs accessing it concurrently non-atomically.
>
> If another cpu holds the lock, this cpu will spin on its own lock.
>
> Remember, other cpus don't touch rl->count. This is a private field, only touched
> by the cpu that owns the per_cpu data. There is no possible 'corruption'.
Ah!!! I must confess that I didn't make it that far into the code...
> So the owner of the per_cpu data does :
>
> /*
>  * disable preemption, get rl = &__get_cpu_var(arp_tables_lock);
>  * then:
>  */
> lock_time:
> 	if (rl->count++ == 0)
> 		spin_lock(&rl->lock);
>
> unlock_time:
> 	if (likely(--rl->count == 0))
> 		spin_unlock(&rl->lock);
>
>
> while other cpus only do:
>
> 	spin_lock(&rl->lock);
> 	/* work on data */
> 	spin_unlock(&rl->lock);
>
> So they cannot corrupt 'count' stuff.
OK, that does function. Hurts my head, though. ;-)
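For anyone else following this thread, here is the whole scheme as I now
understand it, folded into a pair of helpers (untested sketch: the
rec_lock_local()/rec_unlock_local()/fold_counters_from() names are invented
here, local_bh_disable() stands in for the "disable preemption" step, and
each per-cpu lock is assumed to be spin_lock_init()ed at boot):

	#include <linux/interrupt.h>	/* local_bh_disable()/local_bh_enable() */
	#include <linux/percpu.h>
	#include <linux/spinlock.h>

	struct rec_lock {
		spinlock_t lock;
		int count;	/* recursion depth, touched only by the owning CPU */
	};
	static DEFINE_PER_CPU(struct rec_lock, arp_tables_lock);

	/* Owning CPU, fast path. */
	static inline void rec_lock_local(void)
	{
		struct rec_lock *rl;

		local_bh_disable();	/* stay on this CPU, exclude softirq reentry */
		rl = &__get_cpu_var(arp_tables_lock);
		if (likely(rl->count++ == 0))	/* only the outermost call takes the lock */
			spin_lock(&rl->lock);
	}

	static inline void rec_unlock_local(void)
	{
		struct rec_lock *rl = &__get_cpu_var(arp_tables_lock);

		if (likely(--rl->count == 0))	/* only the outermost call drops it */
			spin_unlock(&rl->lock);
		local_bh_enable();
	}

	/* Any other CPU never touches ->count; it just takes the lock itself. */
	static void fold_counters_from(int cpu)
	{
		struct rec_lock *rl = &per_cpu(arp_tables_lock, cpu);

		spin_lock(&rl->lock);
		/* ... read/accumulate that CPU's table counters ... */
		spin_unlock(&rl->lock);
	}

Since ->count is only ever touched by its own CPU with BHs disabled, the
plain ++/-- really is safe, as you say above.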
> > I suggest the following, preferably in a function or macro or something:
> >
> > 	cur_cpu = smp_processor_id();
> > 	if (likely(rl->owner != cur_cpu)) {
> > 		spin_lock(&rl->lock);
> > 		rl->owner = cur_cpu;
> > 		rl->count = 1;
> > 	} else {
> > 		rl->count++;
> > 	}
> >
> > And the inverse for unlock.
> >
> > Or am I missing something subtle?
>
> Apparently Linus missed it too, and reacted badly to my mail.
> I don't know why we discuss this stuff on lkml either...
Encapsulating them so that they appear in the same place might (or might
not) have gotten the fact that you were not doing a recursive lock
through my head. Even so, the name "rec_lock" might have overwhelmed
the code structure in my mind. ;-)
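For completeness, this is the owner-tracking variant I was suggesting, with
lock and unlock wrapped side by side (untested sketch: the struct/function
names and the -1 "no owner" convention are invented here, and the caller is
assumed to have BHs or preemption disabled):

	#include <linux/smp.h>		/* smp_processor_id() */
	#include <linux/spinlock.h>

	struct owner_rec_lock {
		spinlock_t lock;
		int owner;	/* CPU currently holding the lock, or -1 */
		int count;	/* recursion depth on the owning CPU */
	};

	static inline void owner_rec_lock(struct owner_rec_lock *rl)
	{
		int cur_cpu = smp_processor_id();

		if (likely(rl->owner != cur_cpu)) {
			spin_lock(&rl->lock);
			rl->owner = cur_cpu;
			rl->count = 1;
		} else {
			rl->count++;	/* nested acquisition on the owning CPU */
		}
	}

	static inline void owner_rec_unlock(struct owner_rec_lock *rl)
	{
		if (likely(--rl->count == 0)) {
			rl->owner = -1;		/* clear ownership before dropping the lock */
			spin_unlock(&rl->lock);
		}
	}

The unlocked read of ->owner is safe for this check because it can only
equal the current CPU if this CPU itself stored it while holding the lock.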
> I'm going to stop working on this subject and consider drinking some hard
> stuff and watching TV :)
That -is- extreme! ;-)
Thanx, Paul
> See you
>