Message-ID: <OS3P286MB2295A23718762BB07BCB6007F5A79@OS3P286MB2295.JPNP286.PROD.OUTLOOK.COM>
Date: Sun, 19 Feb 2023 22:46:41 +0800
From: Eddy Tao <taoyuan_eddy@...mail.com>
To: Simon Horman <simon.horman@...igine.com>
Cc: netdev@...r.kernel.org, Pravin B Shelar <pshelar@....org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, dev@...nvswitch.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v1 1/1] net: openvswitch: ovs_packet_cmd_execute put sw_flow mainbody in stack
Hi, Simon:
Thanks for looking into this.
The revisions I proposed are complementary, serve the same purpose, and
reside in the same code segment. I described them as two items to
clarify the details; it may be clearer to present them as two steps of
a single revision to avoid confusion.
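For clarity, the general shape of the change is to replace a per-packet
heap allocation with a stack-resident structure on the execute path. A
minimal userspace sketch of that pattern (hypothetical names throughout,
not the actual patch) looks like this:

    /* Hedged sketch: illustrates the "move a per-call heap
     * allocation onto the stack" pattern that the patch applies to
     * sw_flow in ovs_packet_cmd_execute.  demo_flow, execute_heap
     * and execute_stack are made-up stand-ins. */
    #include <stdlib.h>
    #include <string.h>

    struct demo_flow {
            unsigned int key;
            unsigned long long stats;
    };

    /* Before: one malloc/free round-trip per executed packet. */
    static int execute_heap(unsigned int key)
    {
            struct demo_flow *flow = malloc(sizeof(*flow));

            if (!flow)
                    return -1;
            memset(flow, 0, sizeof(*flow));
            flow->key = key;
            /* ... act on the packet via *flow ... */
            free(flow);
            return 0;
    }

    /* After: the flow lives on the stack for the duration of the
     * call, taking the allocator off the per-packet fast path. */
    static int execute_stack(unsigned int key)
    {
            struct demo_flow flow = { .key = key, .stats = 0 };

            /* ... act on the packet via &flow ... */
            return 0;
    }

    int main(void)
    {
            return execute_heap(1) | execute_stack(2);
    }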
And yes, I do have performance results below.
Testing topology:

                 |-----|
           nic1--|     |--nic1
           nic2--|     |--nic2
VM1 (16 cpus)    | ovs |    VM2 (16 cpus)
           nic3--|     |--nic3
           nic4--|     |--nic4
                 |-----|
Two netperf client threads run on each vNIC:
netperf -H $peer -p $((port+$i)) -t UDP_RR -l 60 -- -R 1 -r 8K,8K
netperf -H $peer -p $((port+$i)) -t TCP_RR -l 60 -- -R 1 -r 120,240
netperf -H $peer -p $((port+$i)) -t TCP_CRR -l 60 -- -R 1 -r 120,240
Mode     Iterations  Variance  Average
UDP_RR   10          1.33%     48472   ==> before the change
UDP_RR   10          2.13%     49130   ==> after the change
TCP_RR   10          4.56%     79686   ==> before the change
TCP_RR   10          3.42%     79833   ==> after the change
TCP_CRR  10          0.16%     20596   ==> before the change
TCP_CRR  10          0.11%     21179   ==> after the change
Thanks
eddy