lists.openwall.net - Open Source and information security mailing list archives
Date: Sun, 22 Nov 2020 14:51:18 +0000
From: Alexander Lobakin <alobakin@...me>
To: Pablo Neira Ayuso <pablo@...filter.org>
Cc: Alexander Lobakin <alobakin@...me>, netfilter-devel@...r.kernel.org,
	davem@...emloft.net, netdev@...r.kernel.org, kuba@...nel.org,
	fw@...len.de, razor@...ckwall.org, jeremy@...zel.net,
	tobias@...dekranz.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next,v5 0/9] netfilter: flowtable bridge and vlan enhancements

From: Pablo Neira Ayuso <pablo@...filter.org>
Date: Sun, 22 Nov 2020 12:42:19 +0100

> On Sun, Nov 22, 2020 at 10:26:16AM +0000, Alexander Lobakin wrote:
>> From: Pablo Neira Ayuso <pablo@...filter.org>
>> Date: Fri, 20 Nov 2020 13:49:12 +0100
>
> [...]
>
>>> Something like this:
>>>
>>>                fast path
>>>       .------------------------.
>>>      /                          \
>>>     |        IP forwarding       |
>>>     |       /            \       .
>>>     |     br0            eth0
>>>     .    /   \
>>>    -- veth1  veth2
>>>        .
>>>        .
>>>        .
>>>      eth0
>>>  ab:cd:ef:ab:cd:ef
>>>       VM
>>
>> I'm concerned about bypassing vlan and bridge's .ndo_start_xmit() in
>> case of this shortcut. We'll have incomplete netdevice Tx stats for
>> these two, as they get updated inside these callbacks.
>
> TX device stats are being updated accordingly.
> # ip netns exec nsr1 ip -s link
> 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     RX: bytes  packets  errors  dropped  overrun  mcast
>     0          0        0       0        0        0
>     TX: bytes  packets  errors  dropped  carrier  collsns
>     0          0        0       0        0        0
> 2: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     link/ether 82:0d:f3:b5:59:5d brd ff:ff:ff:ff:ff:ff link-netns ns1
>     RX: bytes         packets  errors  dropped  overrun  mcast
>     213290848248      4869765  0       0        0        0
>     TX: bytes         packets  errors  dropped  carrier  collsns
>     315346667         4777953  0       0        0        0
> 3: veth1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     link/ether 4a:81:2d:9a:02:88 brd ff:ff:ff:ff:ff:ff link-netns ns2
>     RX: bytes         packets  errors  dropped  overrun  mcast
>     315337919         4777833  0       0        0        0
>     TX: bytes         packets  errors  dropped  carrier  collsns
>     213290844826      4869708  0       0        0        0
> 4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     link/ether 82:0d:f3:b5:59:5d brd ff:ff:ff:ff:ff:ff
>     RX: bytes  packets  errors  dropped  overrun  mcast
>     4101       73       0       0        0        0
>     TX: bytes  packets  errors  dropped  carrier  collsns
>     5256       74       0       0        0        0

Aren't these counters very low for br0, despite br0 being an
intermediate point of the traffic flow?

> 5: veth0.10@...h0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
>     link/ether 82:0d:f3:b5:59:5d brd ff:ff:ff:ff:ff:ff
>     RX: bytes  packets  errors  dropped  overrun  mcast
>     4101       73       0       0        0        62
>     TX: bytes      packets  errors  dropped  carrier  collsns
>     315342363      4777893  0       0        0        0
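[Archive note: for readers outside the thread, the fast path being debated is an nftables flowtable. The ruleset below is only an illustrative sketch, not taken from the patch set; the table, flowtable, and device names are assumed to match the veth topology in the quoted diagram.]

```
table inet filter {
	# Flows added to this flowtable are forwarded from the ingress
	# hook of the listed devices, skipping the classic per-hop path.
	flowtable ft {
		hook ingress priority 0
		devices = { veth0, veth1 }
	}

	chain forward {
		type filter hook forward priority 0; policy accept;
		# Offload established TCP/UDP flows into the fast path.
		ip protocol { tcp, udp } flow add @ft
	}
}
```

Once a connection is offloaded this way, later packets no longer traverse the xmit path of intermediate devices such as br0, which is the stats concern raised above.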