Message-ID: <f7tfs2ymi8y.fsf@redhat.com>
Date: Thu, 28 Sep 2023 11:26:21 -0400
From: Aaron Conole <aconole@...hat.com>
To: Nicholas Piggin <npiggin@...il.com>
Cc: netdev@...r.kernel.org, dev@...nvswitch.org,
    Ilya Maximets <imaximet@...hat.com>, Eelco Chaudron <echaudro@...hat.com>,
    Flavio Leitner <fbl@...hat.com>
Subject: Re: [ovs-dev] [RFC PATCH 4/7] net: openvswitch: ovs_vport_receive
reduce stack usage
Nicholas Piggin <npiggin@...il.com> writes:
> Dynamically allocating the sw_flow_key reduces stack usage of
> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
> another GFP_ATOMIC allocation in the receive path.
>
> XXX: is this a problem with memory reserves if ovs is in a
> memory reclaim path, or since we have a skb allocated, is it
> okay to use some GFP_ATOMIC reserves?
>
> Signed-off-by: Nicholas Piggin <npiggin@...il.com>
> ---
This represents a fairly large performance hit. Just my own quick
testing on a system using two netns, iperf3, and simple forwarding rules
shows between 2.5% and 4% performance reduction on x86-64. Note that this
is a simple case and doesn't cover more involved scenarios such as
multiple bridges, tunnels, and internal ports. I suspect such cases
will see an even bigger hit.
I don't know the impact of the other changes, but just an FYI that the
performance impact of this change is extremely noticeable on the x86
platform.
----
ip netns add left
ip netns add right
ip link add eth0 type veth peer name l0
ip link set eth0 netns left
ip netns exec left ip addr add 172.31.110.1/24 dev eth0
ip netns exec left ip link set eth0 up
ip link set l0 up
ip link add eth0 type veth peer name r0
ip link set eth0 netns right
ip netns exec right ip addr add 172.31.110.2/24 dev eth0
ip netns exec right ip link set eth0 up
ip link set r0 up
python3 ovs-dpctl.py add-dp br0
python3 ovs-dpctl.py add-if br0 l0
python3 ovs-dpctl.py add-if br0 r0
python3 ovs-dpctl.py add-flow \
br0 'in_port(1),eth(),eth_type(0x806),arp()' 2
python3 ovs-dpctl.py add-flow \
br0 'in_port(2),eth(),eth_type(0x806),arp()' 1
python3 ovs-dpctl.py add-flow \
br0 'in_port(1),eth(),eth_type(0x800),ipv4()' 2
python3 ovs-dpctl.py add-flow \
br0 'in_port(2),eth(),eth_type(0x800),ipv4()' 1
----
Example results without this patch:
[root@...d-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
...
[ 5] 0.00-10.00 sec 46.7 GBytes 40.2 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 46.7 GBytes 40.2 Gbits/sec receiver
Example results with this patch:
[root@...d-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
...
[ 5] 0.00-10.00 sec 44.9 GBytes 38.6 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 44.9 GBytes 38.6 Gbits/sec receiver
I also tested with UDP at various bandwidths, in addition to this TCP test.
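For reference, the UDP runs were along these lines (the exact bandwidths
varied per run, and the server-side invocation here is assumed rather than
copied from my terminal):
----
ip netns exec left ./git/iperf/src/iperf3 -s -D
ip netns exec right ./git/iperf/src/iperf3 -u -b 10G -c 172.31.110.1
----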
> net/openvswitch/vport.c | 14 +++++++++++---
> 1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
> index 972ae01a70f7..ddba3e00832b 100644
> --- a/net/openvswitch/vport.c
> +++ b/net/openvswitch/vport.c
> @@ -494,9 +494,13 @@ u32 ovs_vport_find_upcall_portid(const struct vport *vport,
> int ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
> const struct ip_tunnel_info *tun_info)
> {
> - struct sw_flow_key key;
> + struct sw_flow_key *key;
> int error;
>
> + key = kmalloc(sizeof(*key), GFP_ATOMIC);
> + if (!key)
> + return -ENOMEM;
> +
> OVS_CB(skb)->input_vport = vport;
> OVS_CB(skb)->mru = 0;
> OVS_CB(skb)->cutlen = 0;
> @@ -510,12 +514,16 @@ int ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
> }
>
> /* Extract flow from 'skb' into 'key'. */
> - error = ovs_flow_key_extract(tun_info, skb, &key);
> + error = ovs_flow_key_extract(tun_info, skb, key);
> if (unlikely(error)) {
> kfree_skb(skb);
> + kfree(key);
> return error;
> }
> - ovs_dp_process_packet(skb, &key);
> + ovs_dp_process_packet(skb, key);
> +
> + kfree(key);
> +
> return 0;
> }
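If the stack savings are worth keeping, one way to avoid the per-packet
GFP_ATOMIC allocation might be a per-CPU scratch key. Below is a rough,
untested sketch of that idea (the ovs_pcpu_key name is made up, and it
assumes ovs_vport_receive() cannot re-enter on the same CPU, e.g. via
internal ports or tunnel devices; if it can, a small per-CPU array indexed
by a recursion counter would be needed instead of a single key):
----
/* Untested sketch: per-CPU scratch key instead of a kmalloc() per packet.
 * Assumes no same-CPU re-entry of ovs_vport_receive(); otherwise use a
 * per-CPU array of keys indexed by recursion depth.
 */
static DEFINE_PER_CPU(struct sw_flow_key, ovs_pcpu_key);

int ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
		      const struct ip_tunnel_info *tun_info)
{
	struct sw_flow_key *key = this_cpu_ptr(&ovs_pcpu_key);
	int error;

	OVS_CB(skb)->input_vport = vport;
	OVS_CB(skb)->mru = 0;
	OVS_CB(skb)->cutlen = 0;
	/* ... tun_info handling unchanged ... */

	/* Extract flow from 'skb' into the per-CPU 'key'. */
	error = ovs_flow_key_extract(tun_info, skb, key);
	if (unlikely(error)) {
		kfree_skb(skb);
		return error;
	}
	ovs_dp_process_packet(skb, key);
	return 0;
}
----
That would keep the stack usage down without adding an allocation to the
receive path, but the re-entry question would need to be answered first.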