Message-ID: <alpine.LFD.2.00.0904151659360.4042@localhost.localdomain>
Date: Wed, 15 Apr 2009 17:02:57 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: David Miller <davem@...emloft.net>
cc: dada1@...mosbay.com, shemminger@...tta.com, kaber@...sh.net,
jeff.chua.linux@...il.com, paulmck@...ux.vnet.ibm.com,
paulus@...ba.org, mingo@...e.hu, laijs@...fujitsu.com,
jengelh@...ozas.de, r000n@...0n.net, linux-kernel@...r.kernel.org,
netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
benh@...nel.crashing.org
Subject: Re: [PATCH] netfilter: use per-cpu spinlock rather than RCU (v3)
On Wed, 15 Apr 2009, David Miller wrote:
>
> I really think we should entertain the idea where we don't RCU quiesce
> when adding rules. That was dismissed as not workable because the new
> rule must be "visible" as soon as we return to userspace, but let's get
> real: effectively it will be.
I never understood that dismissal.
The new rule _will_ be visible as we return to user space. It's just that
old packets may still be in flight in other queues.
But that is true even _without_ the "synchronize_net()". The old packets
just have to have made it slightly further into the queueing - but as far as
user space is concerned, there is absolutely _zero_ difference between the two.
In both cases user space may see packets that were queued under the old rules.
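
Concretely, the no-synchronize variant is just something like this (a sketch
of the idea with made-up names - rule_table, table_lookup, free_old_table and
friends are illustrative, not the real xtables code):

#include <linux/rcupdate.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/skbuff.h>

/* Sketch only: made-up structures, not the actual netfilter code. */
struct rule_table {
	struct rcu_head rcu;
	/* ... the rules themselves ... */
};

static struct rule_table *cur_table;		/* RCU-protected */
static DEFINE_MUTEX(table_mutex);		/* serializes rule updates */

/* hypothetical lookup helper */
unsigned int table_lookup(struct rule_table *t, struct sk_buff *skb);

static void free_old_table(struct rcu_head *head)
{
	kfree(container_of(head, struct rule_table, rcu));
}

/* Rule replacement (the setsockopt path): */
void replace_table(struct rule_table *new)
{
	struct rule_table *old;

	mutex_lock(&table_mutex);
	old = cur_table;
	rcu_assign_pointer(cur_table, new);	/* new rules published here */
	mutex_unlock(&table_mutex);

	/*
	 * No synchronize_net().  Every lookup that starts after the
	 * assignment sees the new rules, so they are visible by the
	 * time we return to user space.  Packets that already picked
	 * up 'old' finish under the old rules - exactly like packets
	 * that were queued a moment earlier.  Only freeing the old
	 * table has to wait for the readers:
	 */
	call_rcu(&old->rcu, free_old_table);
}

/* Per-packet path: */
unsigned int do_lookup(struct sk_buff *skb)
{
	struct rule_table *t;
	unsigned int verdict;

	rcu_read_lock();
	t = rcu_dereference(cur_table);
	verdict = table_lookup(t, skb);
	rcu_read_unlock();
	return verdict;
}

The grace period is only needed before freeing the old table, and call_rcu()
takes care of that without blocking the update path at all.
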
> I almost cringed when the per-spinlock idea was proposed, but per-cpu
> rwlocks just take things too far for my tastes.
I really personally would prefer the RCU approach too. I don't think
rwlocks are any more cringe-worthy than spinlocks, although it is true
that they tend to be slightly more expensive.
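
For comparison, the per-cpu lock scheme being discussed boils down to
something like this (again just a sketch, not the actual patch):

#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Sketch of the per-cpu reader/writer lock idea - not the actual patch. */
static DEFINE_PER_CPU(rwlock_t, table_lock);	/* rwlock_init() each at boot */

/* Packet path: only this CPU's lock is touched, so it stays cache-local. */
static void packet_path(void)
{
	rwlock_t *lock = &get_cpu_var(table_lock);

	read_lock(lock);		/* read_lock_bh() on the real path */
	/* ... table lookup ... */
	read_unlock(lock);
	put_cpu_var(table_lock);
}

/* Rule replacement: lock out the readers on every CPU. */
static void replace_rules(void)
{
	int cpu;

	/* consistent cpu order keeps the nested locking deadlock-free */
	for_each_possible_cpu(cpu)
		write_lock(&per_cpu(table_lock, cpu));
	/* ... swap in the new table ... */
	for_each_possible_cpu(cpu)
		write_unlock(&per_cpu(table_lock, cpu));
}

With plain per-cpu spinlocks the packet path takes spin_lock() on its own
CPU's lock instead, which is where the small cost difference comes from.
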
The pure RCU "just get rid of the unnecessary 'synchronize_net()'" approach
seems to be clearly superior to either.
Linus