Message-ID: <20201114115906.GA21025@salvia>
Date: Sat, 14 Nov 2020 12:59:06 +0100
From: Pablo Neira Ayuso <pablo@...filter.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netfilter-devel@...r.kernel.org, davem@...emloft.net,
netdev@...r.kernel.org, razor@...ckwall.org, jeremy@...zel.net
Subject: Re: [PATCH net-next,v3 0/9] netfilter: flowtable bridge and vlan enhancements
On Fri, Nov 13, 2020 at 05:55:56PM -0800, Jakub Kicinski wrote:
> On Wed, 11 Nov 2020 20:37:28 +0100 Pablo Neira Ayuso wrote:
> > The following patchset augments the Netfilter flowtable fastpath [1] to
> > support for network topologies that combine IP forwarding, bridge and
> > VLAN devices.
> >
> > A typical scenario that can benefit from this infrastructure is composed
> > of several VMs connected to bridge ports where the bridge master device
> > 'br0' has an IP address. A DHCP server is also assumed to be running to
> > provide connectivity to the VMs. The VMs reach the Internet through
> > 'br0' as default gateway, which makes the packet enter the IP forwarding
> > path. Then, netfilter is used to NAT the packets before they leave
> > through the wan device.
> >
> > Something like this:
> >
> >                        fast path
> >                 .------------------------.
> >                /                          \
> >               |           IP forwarding   |
> >               |          /             \  .
> >               |       br0               eth0
> >               .       / \
> >                -- veth1  veth2
> >                    .
> >                    .
> >                    .
> >                  eth0
> >            ab:cd:ef:ab:cd:ef
> >                   VM
> >
> > The idea is to accelerate forwarding by building a fast path that takes
> > packets from the ingress path of the bridge port and places them in the
> > egress path of the wan device (and vice versa), hence skipping the
> > classic bridge and IP stack paths.
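The lookup-or-slow-path decision described above can be sketched as follows. This is a toy userspace model for illustration only, not the kernel implementation; all type and function names here are hypothetical. A hit in the flow table yields the cached egress device directly; a miss means the packet takes the classic bridge / IP forwarding path, which may then install an entry.

```c
#define MAX_FLOWS 64

struct flow_key {            /* simplified 5-tuple */
    unsigned int saddr, daddr;
    unsigned short sport, dport;
    unsigned char proto;
};

struct flow_entry {
    struct flow_key key;
    int ingress_ifindex;     /* e.g. the bridge port (veth1) */
    int egress_ifindex;      /* e.g. the wan device (eth0) */
    int in_use;
};

struct flowtable {
    struct flow_entry entries[MAX_FLOWS];
};

static int flow_key_equal(const struct flow_key *a, const struct flow_key *b)
{
    return a->saddr == b->saddr && a->daddr == b->daddr &&
           a->sport == b->sport && a->dport == b->dport &&
           a->proto == b->proto;
}

/* Return the cached egress ifindex for a flow, or -1 on a miss
 * (miss = fall back to the classic bridge / IP forwarding path). */
int flowtable_lookup(const struct flowtable *ft, const struct flow_key *key)
{
    for (int i = 0; i < MAX_FLOWS; i++) {
        if (ft->entries[i].in_use &&
            flow_key_equal(&ft->entries[i].key, key))
            return ft->entries[i].egress_ifindex;
    }
    return -1;
}

/* The slow path installs an entry once a flow has been evaluated. */
int flowtable_add(struct flowtable *ft, const struct flow_key *key,
                  int ingress, int egress)
{
    for (int i = 0; i < MAX_FLOWS; i++) {
        if (!ft->entries[i].in_use) {
            ft->entries[i].key = *key;
            ft->entries[i].ingress_ifindex = ingress;
            ft->entries[i].egress_ifindex = egress;
            ft->entries[i].in_use = 1;
            return 0;
        }
    }
    return -1; /* table full */
}
```

The point of the sketch is only the control flow: the per-packet cost on a hit is a single table lookup, with no FDB or FIB consultation.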
>
> The problem that immediately comes to mind is that if there is any
> dynamic forwarding state the cache you're creating would need to be
> flushed when FDB changes. Are you expecting users would plug into the
> flowtable devices where they know things are fairly static?
If any of the flowtable devices goes down or is removed, its entries are
removed from the flowtable. This means packets of existing flows are
pushed back up to the classic bridge / forwarding path, which
re-evaluates the fast path.
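The flush-on-device-removal behaviour can be sketched like this (a toy model with hypothetical names, not the kernel code): every cached entry that references the dead ifindex is dropped, so subsequent packets of those flows miss the flowtable and go back through the slow path.

```c
#define MAX_FLOWS 64

struct flow_entry {
    int ingress_ifindex;
    int egress_ifindex;
    int in_use;
};

struct flowtable {
    struct flow_entry entries[MAX_FLOWS];
};

/* Drop every cached flow that uses @ifindex on either side;
 * returns the number of flushed entries. */
int flowtable_flush_dev(struct flowtable *ft, int ifindex)
{
    int flushed = 0;

    for (int i = 0; i < MAX_FLOWS; i++) {
        struct flow_entry *e = &ft->entries[i];

        if (e->in_use &&
            (e->ingress_ifindex == ifindex ||
             e->egress_ifindex == ifindex)) {
            e->in_use = 0;  /* flow falls back to the slow path */
            flushed++;
        }
    }
    return flushed;
}
```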
For each new flow, the fast path is selected freshly, so it uses the
up-to-date FDB to select the bridge port.
Existing flows still follow the old path. The same happens with FIB
currently.
It should be possible to explore purging entries in the flowtable that
are stale due to changes in the topology (either in FDB or FIB).
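One way the purge idea above could look, as an exploratory sketch (all names hypothetical, and the FDB modeled as a simple dst-id -> port array rather than a real MAC table): walk the flowtable, redo the FDB lookup for each cached destination, and drop entries whose cached bridge port no longer matches.

```c
#define MAX_FLOWS 64

struct flow_entry {
    int dst_id;        /* stand-in for the destination MAC address */
    int egress_port;   /* bridge port cached when the flow was set up */
    int in_use;
};

struct flowtable {
    struct flow_entry entries[MAX_FLOWS];
};

/* Re-check each cached flow against the current FDB state (here a
 * dst-id -> port array; out-of-range means "no longer in the FDB")
 * and drop entries whose cached port is stale. Returns the number of
 * purged entries; those flows fall back to the slow path. */
int flowtable_purge_stale(struct flowtable *ft,
                          const int *fdb_port, int fdb_len)
{
    int purged = 0;

    for (int i = 0; i < MAX_FLOWS; i++) {
        struct flow_entry *e = &ft->entries[i];
        int cur;

        if (!e->in_use)
            continue;
        cur = (e->dst_id >= 0 && e->dst_id < fdb_len)
                  ? fdb_port[e->dst_id] : -1;
        if (cur != e->egress_port) {
            e->in_use = 0;
            purged++;
        }
    }
    return purged;
}
```

A real implementation would instead hook the FDB (or FIB) notification path and purge only the affected entries, rather than scanning the whole table.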
What scenario specifically do you have in mind? Something like a VM
migrating from one bridge port to another?
Thank you.