Message-ID: <alpine.LFD.2.00.0904220821240.3101@localhost.localdomain>
Date: Wed, 22 Apr 2009 08:32:08 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Stephen Hemminger <shemminger@...tta.com>
cc: Ingo Molnar <mingo@...e.hu>, Paul Mackerras <paulus@...ba.org>,
paulmck@...ux.vnet.ibm.com, Eric Dumazet <dada1@...mosbay.com>,
Evgeniy Polyakov <zbr@...emap.net>,
David Miller <davem@...emloft.net>, kaber@...sh.net,
jeff.chua.linux@...il.com, laijs@...fujitsu.com,
jengelh@...ozas.de, r000n@...0n.net, linux-kernel@...r.kernel.org,
netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
benh@...nel.crashing.org, mathieu.desnoyers@...ymtl.ca
Subject: Re: [PATCH] netfilter: use per-cpu recursive lock (v13)
On Tue, 21 Apr 2009, Stephen Hemminger wrote:
>
> This version of x_tables (ip/ip6/arp) locking uses a per-cpu
> recursive lock that can be nested.
Ack on the code.
But the comment is _still_ crap. Please update. It's not a recursive lock,
as clearly shown by the code itself. It's a per-cpu read-write lock, and
only the reader side is "recursive" (but that is just how read-write locks
work in Linux, and has nothing to do with this particular lock).
So make the explanation match the code and the intent. Write something
like
    This version of x_tables (ip/ip6/arp) locking uses a per-cpu
    reader-writer lock where the readers can nest.
and don't muddy it with an incorrect commit message. The lock is very much
not recursive - on purpose - for half the people taking it.
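
Roughly, the pattern being described has this shape (just a sketch with
made-up names - example_lock and the helpers below are not the code in the
patch):

	#include <linux/percpu.h>
	#include <linux/spinlock.h>
	#include <linux/cpumask.h>

	static DEFINE_PER_CPU(rwlock_t, example_lock) =
		__RW_LOCK_UNLOCKED(example_lock);

	/* Reader: only touches its own CPU's lock, so the fast path stays
	 * local, and read_lock() may nest on that CPU - the reader side of
	 * Linux rwlocks tolerates that, as noted above. */
	static void example_read_enter(void)
	{
		local_bh_disable();
		read_lock(this_cpu_ptr(&example_lock));
	}

	static void example_read_exit(void)
	{
		read_unlock(this_cpu_ptr(&example_lock));
		local_bh_enable();
	}

	/* Writer: has to sweep every CPU's lock to exclude all readers.
	 * Taking write_lock() twice on the same lock would deadlock - the
	 * write side is not recursive. */
	static void example_write_enter(void)
	{
		unsigned int cpu;

		local_bh_disable();
		for_each_possible_cpu(cpu)
			write_lock(&per_cpu(example_lock, cpu));
	}

	static void example_write_exit(void)
	{
		unsigned int cpu;

		for_each_possible_cpu(cpu)
			write_unlock(&per_cpu(example_lock, cpu));
		local_bh_enable();
	}

The reader side nests and the writer side doesn't, which is exactly the
asymmetry the comment should spell out.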
[ That, btw, was always true, even in the original random open-coded
version. Because you can't actually do a real recursive lock without
having a notion of "current ownership", either by making the count be
<per-thread,per-lock> - like the BKL - or by saving the ownership
information in the lock. A plain counter simply doesn't do it. ]
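
The difference shows up in a sketch of what a genuinely recursive lock has
to track (made-up names again, and simplified - no memory-ordering care,
no debugging checks):

	#include <linux/sched.h>
	#include <linux/spinlock.h>

	struct owner_lock {
		spinlock_t		wait_lock;	/* what everybody contends on */
		struct task_struct	*owner;		/* who holds it right now */
		unsigned int		depth;		/* how often the owner took it */
	};

	static void owner_lock_acquire(struct owner_lock *ol)
	{
		/* If owner == current, nobody else can change it until we
		 * release, so this unlocked check is safe for the owner. */
		if (ol->owner == current) {
			ol->depth++;
			return;
		}
		spin_lock(&ol->wait_lock);
		ol->owner = current;
		ol->depth = 1;
	}

	static void owner_lock_release(struct owner_lock *ol)
	{
		if (--ol->depth)
			return;
		ol->owner = NULL;
		spin_unlock(&ol->wait_lock);
	}

Drop the owner field and keep only the counter, and acquire() can no
longer tell "I already hold this" apart from "somebody else holds this" -
which is why a plain counter can't give you recursion.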
Linus