Date:   Fri, 28 Jun 2019 11:37:42 +0800
From:   wenxu <wenxu@...oud.cn>
To:     Pablo Neira Ayuso <pablo@...filter.org>,
        Florian Westphal <fw@...len.de>
Cc:     netfilter-devel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH 2/3 nf-next] netfilter:nf_flow_table: Support bridge type
 flow offload


On 6/27/2019 8:58 PM, Pablo Neira Ayuso wrote:
> On Thu, Jun 27, 2019 at 02:22:36PM +0800, wenxu wrote:
>> On 6/27/2019 3:19 AM, Florian Westphal wrote:
>>> Florian Westphal <fw@...len.de> wrote:
> [...]
>>>> What's the idea with this patch?
>>>>
>>>> Do you see a performance improvement when bypassing the bridge layer?
>>>> If so, how much?
>>>>
>>>> I just wonder if it's really cheaper than not using bridge conntrack in
>>>> the first place :-)
>> This patch builds on the conntrack support in bridge. It bypasses the
>> FDB lookup and the conntrack lookup to get the performance
>> improvement. More importantly, it lays the groundwork for hardware
>> offload in the future, once nf_tables gains hardware offload support.
> Florian would like to see numbers / benchmark.


I just did a simple performance test with the following setup.

ip netns add ns21
ip netns add ns22
ip l add dev veth21 type veth peer name eth0 netns ns21
ip l add dev veth22 type veth peer name eth0 netns ns22
ifconfig veth21 up
ifconfig veth22 up
ip netns exec ns21 ip a a dev eth0 10.0.0.7/24
ip netns exec ns22 ip a a dev eth0 10.0.0.8/24
ip netns exec ns21 ifconfig eth0 up
ip netns exec ns22 ifconfig eth0 up
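
A quick sanity check that the addresses took effect would be something
like:

ip netns exec ns21 ip addr show eth0
ip netns exec ns22 ip addr show eth0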

ip l add dev br0 type bridge vlan_filtering 1
brctl addif br0 veth21
brctl addif br0 veth22

ifconfig br0 up

bridge vlan add dev veth21 vid 200 pvid untagged
bridge vlan add dev veth22 vid 200 pvid untagged
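
Before the firewall rules are loaded, the VLAN setup and basic L2
connectivity across the bridge can be verified with something like:

bridge vlan show
ip netns exec ns21 ping -c 2 10.0.0.8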

nft add table bridge firewall
nft add chain bridge firewall zones { type filter hook prerouting priority - 300 \; }
nft add rule bridge firewall zones counter ct zone set iif map { "veth21" : 2, "veth22" : 2 }

nft add chain bridge firewall rule-200-ingress
nft add rule bridge firewall rule-200-ingress ct zone 2 ct state established,related counter accept
nft add rule bridge firewall rule-200-ingress ct zone 2 ct state invalid counter drop
nft add rule bridge firewall rule-200-ingress ct zone 2 tcp dport 23 ct state new counter accept
nft add rule bridge firewall rule-200-ingress counter drop

nft add chain bridge firewall rule-200-egress
nft add rule bridge firewall rule-200-egress ct zone 2 ct state established,related counter accept
nft add rule bridge firewall rule-200-egress ct zone 2 ct state invalid counter drop
nft add rule bridge firewall rule-200-egress ct zone 2 tcp dport 23 ct state new counter drop
nft add rule bridge firewall rule-200-egress counter accept

nft add chain bridge firewall rules-all { type filter hook prerouting priority - 150 \; }
nft add rule bridge firewall rules-all counter meta protocol ip iif vmap { "veth22" : jump rule-200-ingress, "veth21" : jump rule-200-egress }
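
The resulting ruleset (including the rule counters) can be inspected
with:

nft list ruleset bridge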



ns21 communicates with ns22: iperf in ns21 connects to 10.0.0.8 in ns22
on dport 22.
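
For reference, the server side (not shown above) would be started with
something like:

ip netns exec ns22 iperf -s -p 22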


First run, with flow offload enabled:

nft add flowtable bridge firewall fb2 { hook ingress priority 0 \; devices = { veth21, veth22 } \; }
nft add chain bridge firewall ftb-all { type filter hook forward priority 0 \; policy accept \; }
nft add rule bridge firewall ftb-all counter ct zone 2 ip protocol tcp flow offload @fb2
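
That the flow really takes the offload path can be confirmed via
conntrack, where offloaded entries carry the [OFFLOAD] flag (assuming
conntrack-tools is installed):

conntrack -L | grep OFFLOAD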

# iperf -c 10.0.0.8 -p 22 -t 60 -i2
------------------------------------------------------------
Client connecting to 10.0.0.8, TCP port 22
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.7 port 60014 connected with 10.0.0.8 port 22
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  10.8 GBytes  46.5 Gbits/sec
[  3]  2.0- 4.0 sec  10.9 GBytes  46.7 Gbits/sec
[  3]  4.0- 6.0 sec  10.9 GBytes  46.8 Gbits/sec
[  3]  6.0- 8.0 sec  11.0 GBytes  47.2 Gbits/sec
[  3]  8.0-10.0 sec  11.0 GBytes  47.1 Gbits/sec
[  3] 10.0-12.0 sec  11.0 GBytes  47.1 Gbits/sec
[  3] 12.0-14.0 sec  11.7 GBytes  50.4 Gbits/sec
[  3] 14.0-16.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 16.0-18.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 18.0-20.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 20.0-22.0 sec  12.0 GBytes  51.5 Gbits/sec
[  3] 22.0-24.0 sec  12.0 GBytes  51.4 Gbits/sec
[  3] 24.0-26.0 sec  12.0 GBytes  51.3 Gbits/sec
[  3] 26.0-28.0 sec  12.0 GBytes  51.7 Gbits/sec
[  3] 28.0-30.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 30.0-32.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 32.0-34.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 34.0-36.0 sec  12.0 GBytes  51.5 Gbits/sec
[  3] 36.0-38.0 sec  12.0 GBytes  51.5 Gbits/sec
[  3] 38.0-40.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 40.0-42.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 42.0-44.0 sec  12.0 GBytes  51.5 Gbits/sec
[  3] 44.0-46.0 sec  12.0 GBytes  51.4 Gbits/sec
[  3] 46.0-48.0 sec  12.0 GBytes  51.4 Gbits/sec
[  3] 48.0-50.0 sec  12.0 GBytes  51.5 Gbits/sec
[  3] 50.0-52.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 52.0-54.0 sec  12.0 GBytes  51.6 Gbits/sec
[  3] 54.0-56.0 sec  12.0 GBytes  51.5 Gbits/sec
[  3] 56.0-58.0 sec  11.9 GBytes  51.2 Gbits/sec
[  3] 58.0-60.0 sec  11.8 GBytes  50.7 Gbits/sec
[  3]  0.0-60.0 sec   353 GBytes  50.5 Gbits/sec
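
(Between the two runs the offload rule and the flowtable have to be
removed again, e.g. with something like the following; the exact
teardown commands are not recorded here:

nft flush chain bridge firewall ftb-all
nft delete chain bridge firewall ftb-all
nft delete flowtable bridge firewall fb2
)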


Second run, without any offload:
# iperf -c 10.0.0.8 -p 22 -t 60 -i2
------------------------------------------------------------
Client connecting to 10.0.0.8, TCP port 22
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.7 port 60536 connected with 10.0.0.8 port 22
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  8.88 GBytes  38.1 Gbits/sec
[  3]  2.0- 4.0 sec  9.02 GBytes  38.7 Gbits/sec
[  3]  4.0- 6.0 sec  9.02 GBytes  38.8 Gbits/sec
[  3]  6.0- 8.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3]  8.0-10.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 10.0-12.0 sec  9.04 GBytes  38.8 Gbits/sec
[  3] 12.0-14.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 14.0-16.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 16.0-18.0 sec  9.06 GBytes  38.9 Gbits/sec
[  3] 18.0-20.0 sec  9.07 GBytes  39.0 Gbits/sec
[  3] 20.0-22.0 sec  9.07 GBytes  38.9 Gbits/sec
[  3] 22.0-24.0 sec  9.06 GBytes  38.9 Gbits/sec
[  3] 24.0-26.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 26.0-28.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 28.0-30.0 sec  9.06 GBytes  38.9 Gbits/sec
[  3] 30.0-32.0 sec  9.06 GBytes  38.9 Gbits/sec
[  3] 32.0-34.0 sec  9.07 GBytes  38.9 Gbits/sec
[  3] 34.0-36.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 36.0-38.0 sec  9.03 GBytes  38.8 Gbits/sec
[  3] 38.0-40.0 sec  9.03 GBytes  38.8 Gbits/sec
[  3] 40.0-42.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 42.0-44.0 sec  9.03 GBytes  38.8 Gbits/sec
[  3] 44.0-46.0 sec  9.04 GBytes  38.8 Gbits/sec
[  3] 46.0-48.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 48.0-50.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 50.0-52.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 52.0-54.0 sec  9.06 GBytes  38.9 Gbits/sec
[  3] 54.0-56.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 56.0-58.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3] 58.0-60.0 sec  9.05 GBytes  38.9 Gbits/sec
[  3]  0.0-60.0 sec   271 GBytes  38.8 Gbits/sec




With flow offload enabled, throughput improves from about 38.8 Gbits/sec
to about 50.5 Gbits/sec, roughly a 30% gain.
