Message-ID: <20091206141805.GA3043@ami.dom.local>
Date: Sun, 6 Dec 2009 15:18:05 +0100
From: Jarek Poplawski <jarkao2@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH] inetpeer: optimizations
On Sun, Dec 06, 2009 at 09:47:03AM +0100, Eric Dumazet wrote:
> Jarek Poplawski a écrit :
> > Eric Dumazet wrote, On 12/05/2009 01:11 PM:
> >> void inet_putpeer(struct inet_peer *p)
> >> {
> >> - spin_lock_bh(&inet_peer_unused_lock);
> >> - if (atomic_dec_and_test(&p->refcnt)) {
> >> - list_add_tail(&p->unused, &unused_peers);
> >> + local_bh_disable();
> >> + if (atomic_dec_and_lock(&p->refcnt, &unused_peers.lock)) {
> >
> > Why not:
> > if (atomic_dec_and_test(&p->refcnt)) {
> > spin_lock_bh(&inet_peer_unused_lock);
> > ...
>
> Because we have to take the lock before doing the final 1 -> 0 refcount transition.
>
> (Another thread could do the 0 -> 1 transition)
>
> I'll cook a followup patch to also avoid taking the lock in the 1+ -> 2+ transitions.
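The point Eric is making can be illustrated with a userspace sketch of `atomic_dec_and_lock()` semantics (this is not the kernel implementation; it uses C11 atomics and pthreads for illustration): the fast path decrements lock-free while the count stays above 1, and only a possible final 1 -> 0 transition takes the lock first, so a concurrent thread holding the lock cannot revive the object (0 -> 1) underneath us.

```c
#include <stdatomic.h>
#include <pthread.h>

/* Userspace sketch of atomic_dec_and_lock() semantics.
 * Returns 1 with the lock HELD if the count dropped to zero,
 * 0 (lock not held) otherwise. */
static int atomic_dec_and_lock(atomic_int *cnt, pthread_mutex_t *lock)
{
	/* Fast path: count > 1, decrement without touching the lock. */
	int v = atomic_load(cnt);
	while (v > 1) {
		if (atomic_compare_exchange_weak(cnt, &v, v - 1))
			return 0;	/* not the last reference */
	}
	/* Slow path: this may be the final 1 -> 0 transition, so the
	 * lock must be taken *before* decrementing -- otherwise another
	 * thread could do a 0 -> 1 revival in the window. */
	pthread_mutex_lock(lock);
	if (atomic_fetch_sub(cnt, 1) == 1)
		return 1;		/* hit zero; caller owns the lock */
	pthread_mutex_unlock(lock);
	return 0;
}
```

The followup Eric mentions would extend the fast path so the common 2+ -> 1+ drops never take the lock at all, which the CAS loop above already models.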
I see... So it's this concept of atomic refcounts with locking, which
I can't get used to. Anyway, since local_bh_disable/enable() are more
than one or two asm instructions, and this is all about optimization,
it seems to me it's worth avoiding them with one of these:
a) additional atomic test under the lock after unlocked
atomic_dec_and_test(),
b) implementing atomic_dec_and_lock_bh(),
c) if there are problems with b), open-code it here.
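Option a) above can be sketched the same way (again a userspace illustration, not kernel code): decrement without the lock, and only on the apparent final 1 -> 0 transition take the lock and re-check the count, since another thread may have revived the object in the window. This keeps the common path lock-free without needing local_bh_disable() around it; note it has its own subtleties (the revive/re-drop window), which is presumably why the generic atomic_dec_and_lock() exists.

```c
#include <stdatomic.h>
#include <pthread.h>

/* Sketch of option a): unlocked decrement, then an additional atomic
 * test under the lock if we appeared to hit zero.  Returns 1 with the
 * lock HELD if the object really reached zero, 0 otherwise. */
static int put_ref_recheck(atomic_int *cnt, pthread_mutex_t *lock)
{
	if (atomic_fetch_sub(cnt, 1) != 1)
		return 0;		/* not the last reference */
	pthread_mutex_lock(lock);
	if (atomic_load(cnt) == 0)
		return 1;		/* still zero: caller frees, lock held */
	pthread_mutex_unlock(lock);	/* someone revived it meanwhile */
	return 0;
}
```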
Thanks,
Jarek P.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html