Message-ID: <20180313134139.GD31828@breakpoint.cc>
Date: Tue, 13 Mar 2018 14:41:39 +0100
From: Florian Westphal <fw@...len.de>
To: David Miller <davem@...emloft.net>
Cc: nbd@....name, pablo@...filter.org, netfilter-devel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH 00/30] Netfilter/IPVS updates for net-next
David Miller <davem@...emloft.net> wrote:
> From: Felix Fietkau <nbd@....name>
> Date: Mon, 12 Mar 2018 20:30:01 +0100
>
> > It's not dead and useless. In its current state, it has a software fast
> > path that significantly improves nftables routing/NAT throughput,
> > especially on embedded devices.
> > On some devices, I've seen "only" 20% throughput improvement (along with
> > CPU usage reduction), on others it's quite a bit more. This is
> > without any extra drivers or patches aside from what's posted.
>
> I wonder if this software fast path has the exploitability problems that
> things like the ipv4 routing cache and the per-cpu flow cache both had.
No, each entry in the flow table is backed by an entry in the conntrack
table, and the conntrack table has an upper ceiling.
As the decision of when an entry gets placed into the flow table is
configurable via the ruleset (nftables for now, iptables support will be
coming too), one can tie the 'fastpathing' to almost arbitrary criteria,
e.g.
'only flows from the trusted internal network'
'only flows that saw two-way communication'
'only flows that sent more than 100 kbyte'
or any combination thereof (see the sketch below).
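
As a rough illustration only, a ruleset combining such criteria could
look something like this (nft 0.8.x-style 'flow offload' statement;
the table, flowtable, interface names and addresses here are made up,
the exact keyword may differ between nft versions, and 'ct bytes'
matching needs conntrack accounting enabled):

    table inet filter {
            flowtable fastpath {
                    hook ingress priority 0
                    devices = { eth0, eth1 }
            }
            chain forward {
                    type filter hook forward priority 0; policy accept;
                    # place only established flows from the internal
                    # network that already moved >100 kbyte into the
                    # flow table
                    ip saddr 192.168.0.0/24 ct state established ct bytes > 100000 flow offload @fastpath
            }
    }

The first packets of a flow still traverse the normal forward path;
only once such a rule matches is the flow added to the flow table, so
the policy decision stays entirely in the ruleset.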
Do you see another problem that needs to be addressed?