Message-ID: <e963f5ae-3a6f-f736-634b-831a9092d8d8@nbd.name>
Date: Mon, 12 Mar 2018 21:22:07 +0100
From: Felix Fietkau <nbd@....name>
To: David Miller <davem@...emloft.net>
Cc: pablo@...filter.org, netfilter-devel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH 00/30] Netfilter/IPVS updates for net-next
On 2018-03-12 21:01, David Miller wrote:
> From: Felix Fietkau <nbd@....name>
> Date: Mon, 12 Mar 2018 20:30:01 +0100
>
>> It's not dead and useless. In its current state, it has a software fast
>> path that significantly improves nftables routing/NAT throughput,
>> especially on embedded devices.
>> On some devices, I've seen "only" 20% throughput improvement (along with
CPU usage reduction), on others it's quite a bit more. This is
>> without any extra drivers or patches aside from what's posted.
>
> I wonder if this software fast path has the exploitability problems that
> things like the ipv4 routing cache and the per-cpu flow cache both had.
> And the reason for which both were removed.
>
> I don't see how you can avoid this problem.
>
> I'm willing to be shown otherwise :-)
I don't think it suffers from the same issues, and if it does, it's a
lot easier to mitigate. The ruleset can easily be configured to only
offload connections that transferred a certain amount of data, handling
only bulk flows.
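For illustration, a ruleset along these lines could express that policy
(interface names here are hypothetical; the exact byte threshold is up to
the administrator):

```
table inet filter {
	flowtable ft {
		hook ingress priority 0
		devices = { eth0, eth1 }
	}
	chain forward {
		type filter hook forward priority 0; policy accept;
		# only offload flows that have already moved >1 MB,
		# so short-lived connections never create offload entries
		ct bytes > 1048576 flow offload @ft
	}
}
```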
It's easy to put an upper limit on the number of offloaded connections,
and there's nothing in the code that just creates an offload entry per
packet or per lookup or something like that.
If you have other concerns, I'm sure we can address them with follow-up
patches, but as it stands, I think the code is already quite useful.
- Felix