Date:   Mon, 20 Feb 2023 11:04:01 +0800
From:   Eddy Tao <taoyuan_eddy@...mail.com>
To:     Simon Horman <simon.horman@...igine.com>
Cc:     netdev@...r.kernel.org, Pravin B Shelar <pshelar@....org>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>, dev@...nvswitch.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v1 1/1] net: openvswitch: ovs_packet_cmd_execute
 put sw_flow mainbody in stack

Hi Simon,

To get better visibility into the effect of the patch, I ran another
test, described below.

I disabled data-path flow installation so that all traffic is steered
to the slow path; this makes it possible to observe performance on the
slow path, where ovs_packet_cmd_execute is used extensively.
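For reference, one way to force all traffic onto the slow path with a
stock ovs-vswitchd is to disable megaflow installation via the upcall
flow limit and flush any flows already installed. This is a sketch of
the idea, not necessarily the exact mechanism used in this test:

# Prevent new datapath (megaflow) installs by dropping the flow limit
ovs-appctl upcall/set-flow-limit 0
# Flush datapath flows that were installed before the limit change
ovs-dpctl del-flows

With no datapath flows present, every packet takes an upcall to
userspace, and re-injection exercises the ovs_packet_cmd_execute path.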


Testing topology

             |-----|
       nic1--|     |--nic1
       nic2--|     |--nic2
VM1 (16 cpus)| ovs |  VM2 (16 cpus)
       nic3--|     |--nic3
       nic4--|     |--nic4
             |-----|
Two netperf client threads run on each vnic:

netperf -H $peer -p $((port+$i)) -t TCP_STREAM  -l 60
netperf -H $peer -p $((port+$i)) -t TCP_RR  -l 60 -- -R 1 -r 120,240
netperf -H $peer -p $((port+$i)) -t TCP_CRR -l 60 -- -R 1 -r 120,240
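The commands above are parameterized by $i so that each client thread
gets its own netperf control port. Below is a sketch of how the eight
client threads (4 vnics x 2 threads each) might be launched; $peer, the
base port, and the loop bounds are illustrative assumptions, not taken
from the original harness:

# Assumed harness sketch: 2 netperf clients per vnic, 8 in total.
# (A netserver instance must be listening on each port on $peer.)
peer=192.168.0.2     # VM2-side address (placeholder)
port=12865           # base control port (netperf default, placeholder)
for i in $(seq 0 7); do
    netperf -H $peer -p $((port+$i)) -t TCP_RR -l 60 -- -R 1 -r 120,240 &
done
wait                 # collect per-thread results once all finish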

Mode         Iterations   Variance   Average

TCP_STREAM       10        3.83%       1433   ==> before the change
TCP_STREAM       10        3.39%       1504   ==> after  the change

TCP_RR           10        2.35%      45145   ==> before the change
TCP_RR           10        1.06%      47250   ==> after  the change

TCP_CRR          10        0.54%      11310   ==> before the change
TCP_CRR          10        2.64%      12741   ==> after  the change

(Averages are in netperf's default units, since no -f option was
given: 10^6 bits/s for TCP_STREAM, transactions/s for TCP_RR/TCP_CRR.)


Considering the size and simplicity of the patch, I would say the
performance benefit is decent: roughly 5% for TCP_STREAM and TCP_RR,
and about 13% for TCP_CRR.

Thanks

eddy

