Message-ID: <20201122201505.GA31257@salvia>
Date: Sun, 22 Nov 2020 21:15:05 +0100
From: Pablo Neira Ayuso <pablo@...filter.org>
To: Alexander Lobakin <alobakin@...me>
Cc: netfilter-devel@...r.kernel.org, davem@...emloft.net,
netdev@...r.kernel.org, kuba@...nel.org, fw@...len.de,
razor@...ckwall.org, jeremy@...zel.net, tobias@...dekranz.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next,v5 0/9] netfilter: flowtable bridge and vlan enhancements

On Sun, Nov 22, 2020 at 02:51:18PM +0000, Alexander Lobakin wrote:
> From: Pablo Neira Ayuso <pablo@...filter.org>
> Date: Sun, 22 Nov 2020 12:42:19 +0100
>
> > On Sun, Nov 22, 2020 at 10:26:16AM +0000, Alexander Lobakin wrote:
> >> From: Pablo Neira Ayuso <pablo@...filter.org>
> >> Date: Fri, 20 Nov 2020 13:49:12 +0100
> > [...]
> >>> Something like this:
> >>>
> >>>                  fast path
> >>>           .------------------------.
> >>>          /                          \
> >>>         |       IP forwarding       |
> >>>         |      /             \      .
> >>>         |    br0             eth0
> >>>         .   /   \
> >>>         -- veth1  veth2
> >>>              .
> >>>              .
> >>>              .
> >>>             eth0
> >>>       ab:cd:ef:ab:cd:ef
> >>>             VM
> >>
> >> I'm concerned about bypassing the vlan and bridge devices'
> >> .ndo_start_xmit() in case of this shortcut. We'll have incomplete
> >> netdevice Tx stats for these two, as they are updated inside these
> >> callbacks.
> >
> > TX device stats are being updated accordingly.
> >
> > # ip netns exec nsr1 ip -s link
> > 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > RX: bytes packets errors dropped overrun mcast
> > 0 0 0 0 0 0
> > TX: bytes packets errors dropped carrier collsns
> > 0 0 0 0 0 0
> > 2: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
> > link/ether 82:0d:f3:b5:59:5d brd ff:ff:ff:ff:ff:ff link-netns ns1
> > RX: bytes packets errors dropped overrun mcast
> > 213290848248 4869765 0 0 0 0
> > TX: bytes packets errors dropped carrier collsns
> > 315346667 4777953 0 0 0 0
> > 3: veth1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
> > link/ether 4a:81:2d:9a:02:88 brd ff:ff:ff:ff:ff:ff link-netns ns2
> > RX: bytes packets errors dropped overrun mcast
> > 315337919 4777833 0 0 0 0
> > TX: bytes packets errors dropped carrier collsns
> > 213290844826 4869708 0 0 0 0
> > 4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
> > link/ether 82:0d:f3:b5:59:5d brd ff:ff:ff:ff:ff:ff
> > RX: bytes packets errors dropped overrun mcast
> > 4101 73 0 0 0 0
> > TX: bytes packets errors dropped carrier collsns
> > 5256 74 0 0 0 0
>
> Aren't these counters very low for br0, even though br0 is an
> intermediate point in the traffic flow?
Most packets follow the flowtable fast path, which bypasses the br0
device. Bumping the br0 stats would be misleading: it would make the
user think that the packets follow the classic bridge layer path,
while they do not. The flowtable has its own counters to let the user
collect stats on the packets that follow the fast path.
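
For reference, such a configuration looks roughly like the sketch
below. This is only a sketch: the table/chain names are made up, the
device names are taken from the nsr1 output above, the counter flag
in the flowtable needs a kernel with NF_FLOWTABLE_COUNTER support,
and depending on the nft version the rule may need the older
"flow offload @ft" syntax instead of "flow add @ft":

  table inet filtering {
          # fast path via the ingress hook of veth0/veth1, with
          # per-flow packet/byte accounting enabled
          flowtable ft {
                  hook ingress priority 0
                  devices = { veth0, veth1 }
                  counter
          }
          chain forward {
                  type filter hook forward priority 0; policy accept;
                  # offload established TCP flows to the flowtable
                  ip protocol tcp flow add @ft
          }
  }

With the counter flag set, the packets/bytes of offloaded flows are
accounted to their conntrack entries, visible e.g. via conntrack -L.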