Message-ID: <06e4a534-d15d-4b17-b548-4927d42152e1@huawei.com>
Date: Fri, 17 Mar 2023 10:37:22 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Jakub Kicinski <kuba@...nel.org>, Ronak Doshi <doshir@...are.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
Pv-drivers <Pv-drivers@...are.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Guolin Yang <gyang@...are.com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net] vmxnet3: use gro callback when UPT is enabled
On 2023/3/17 4:34, Jakub Kicinski wrote:
> On Thu, 16 Mar 2023 05:21:42 +0000 Ronak Doshi wrote:
>> Below are some sample test numbers collected by our perf team.
>> Test                                          Socket & msg size   Base         Using only gro   Diff
>> 1VM 14vcpu UDP stream receive (packets/sec)   256K / 256 bytes    217.01 Kps   187.98 Kps       -13.37%
>> 16VM 2vcpu TCP stream send Thpt (Gbps)        8K / 256 bytes      18.00 Gbps   17.02 Gbps       -5.44%
>> 1VM 14vcpu ResponseTimeMean Receive (us)                          163 us       170 us           -4.29%
>
> A bit more than I suspected, thanks for the data.
Maybe we should first do some investigation to find out why the performance
loss is larger than expected.

For example: when an LRO'ed skb is already sitting in gro_list->list, does a
new LRO'ed skb from the same flow go through the whole GRO processing only
for us to find out that we have to flush the old LRO'ed skb out of
gro_list->list and add the new LRO'ed skb to gro_list->list again?
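
To make the suspicion concrete, below is a toy userspace model of a single
GRO bucket. This is not the kernel code (the real path is dev_gro_receive());
every name in it is made up for illustration. It just shows the cycle being
suspected: each new LRO'ed skb of a flow only flushes its predecessor and
takes its place in the list, so no aggregation ever happens.

        #include <stdbool.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Toy model of one GRO bucket; illustrative only. */
        struct toy_skb {
                int flow;               /* flow identity (a hash in real GRO) */
                bool lro;               /* already aggregated by LRO, can't merge */
                struct toy_skb *next;
        };

        static struct toy_skb *gro_list;

        static void gro_receive(struct toy_skb *skb)
        {
                struct toy_skb **pp;

                for (pp = &gro_list; *pp; pp = &(*pp)->next) {
                        struct toy_skb *p = *pp;

                        if (p->flow != skb->flow)
                                continue;       /* not the same flow */

                        if (!p->lro && !skb->lro) {
                                /* normal GRO case: merge and we are done */
                                printf("flow %d: merged\n", skb->flow);
                                free(skb);
                                return;
                        }

                        /* suspected LRO case: all the matching work above
                         * only ends up flushing the old LRO'ed skb ... */
                        printf("flow %d: flush old LRO'ed skb\n", p->flow);
                        *pp = p->next;
                        free(p);
                        break;
                }

                /* ... and queueing the new one, repeating the cycle for
                 * every LRO'ed skb of the flow */
                skb->next = gro_list;
                gro_list = skb;
                printf("flow %d: queued (lro=%d)\n", skb->flow, skb->lro);
        }

        int main(void)
        {
                for (int i = 0; i < 3; i++) {
                        struct toy_skb *skb = calloc(1, sizeof(*skb));

                        skb->flow = 1;
                        skb->lro = true;
                        gro_receive(skb);       /* flushes its predecessor */
                }
                return 0;
        }

If that is really what happens, every LRO'ed skb of the flow pays the full
lookup/flush cost without any of the aggregation benefit, which might explain
losses larger than expected.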