[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <alpine.LFD.2.00.0904161353370.4042@localhost.localdomain>
Date:	Thu, 16 Apr 2009 14:02:42 -0700 (PDT)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Stephen Hemminger <shemminger@...tta.com>
cc:	Eric Dumazet <dada1@...mosbay.com>, paulmck@...ux.vnet.ibm.com,
	Patrick McHardy <kaber@...sh.net>,
	David Miller <davem@...emloft.net>, jeff.chua.linux@...il.com,
	paulus@...ba.org, mingo@...e.hu, laijs@...fujitsu.com,
	jengelh@...ozas.de, r000n@...0n.net, linux-kernel@...r.kernel.org,
	netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
	benh@...nel.crashing.org
Subject: Re: [PATCH[] netfilter: use per-cpu reader-writer lock (v0.7)
On Thu, 16 Apr 2009, Stephen Hemminger wrote:
>
> This version of x_tables (ip/ip6/arp) locking uses a per-cpu
> rwlock that can be nested. It is sort of like earlier brwlock 
> (fast reader, slow writer). The locking is isolated so future improvements
> can concentrate on measuring/optimizing xt_table_info_lock. I tried
> other versions based on recursive spin locks and sequence counters and 
> for me, the risk of inventing own locking primitives not worth it at this time.
This is stil scary.
Do we guarantee that read-locks nest in the presense of a waiting writer 
on another CPU? Now, I know we used to (ie readers always nested happily 
with readers even if there were pending writers), and then we broke it. I 
don't know that we ever unbroke it.
IOW, at least at some point we deadlocked on this (due to trying to be 
fair, and not lettign in readers while earlier writers were waiting):
	CPU#1			CPU#2
	read_lock
				write_lock
				.. spins with write bit set, waiting for
				   readers to go away ..
	recursive read_lock
	.. spins due to the write bit
	   being. BOOM: deadlock  ..
Now, I _think_ we avoid this, but somebody should double-check.
Also, I have still yet to hear the answer to why we care about stale 
counters of dead rules so much that we couldn't just free them later with 
RCU.
			Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
Powered by blists - more mailing lists
 
