Message-Id: <56710dc8-f289-0211-db97-1a1ea29e38f7@linux.vnet.ibm.com>
Date: Fri, 3 Nov 2017 00:30:12 -0400
From: Matthew Rosato <mjrosato@...ux.vnet.ibm.com>
To: Wei Xu <wexu@...hat.com>
Cc: Jason Wang <jasowang@...hat.com>, mst@...hat.com,
netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: Regression in throughput between kvm guests over virtual bridge
On 10/31/2017 03:07 AM, Wei Xu wrote:
> On Thu, Oct 26, 2017 at 01:53:12PM -0400, Matthew Rosato wrote:
>>
>>>
>>> Are you using the same binding as mentioned in your previous mail? It
>>> might be caused by cpu contention between pktgen and vhost; could you
>>> please try running pktgen from another idle cpu by adjusting the binding?
>>
>> I don't think that's the case -- I can cause pktgen to hang in the guest
>> without any cpu binding, and with vhost disabled even.
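(For reference, the binding adjustment being discussed can be sketched with
taskset; the CPU number below is illustrative, not taken from the test setup:)

```shell
# Pin the traffic generator to a specific CPU (here CPU 0) so it does not
# contend with the vhost worker thread running on another core.
# "echo" stands in for the real pktgen/uperf invocation.
taskset -c 0 echo "pinned run"
```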
>
> Yes, I did a test and it also hangs in the guest. Before we figure that
> out, maybe you could try UDP with uperf for these cases?
>
> VM -> Host
> Host -> VM
> VM -> VM
>
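(A minimal uperf profile for such a UDP stream test might look like the
sketch below; it is modeled on uperf's sample profiles, and the thread
count, message size, and duration are assumptions, not values from the
runs reported here:)

```xml
<?xml version="1.0"?>
<profile name="udp_stream">
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h protocol=udp"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="write" options="count=16 size=64k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>
```

Run on the receiver with `uperf -s` and on the sender with the profile,
passing the peer address via the `h` environment variable.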
Here are averaged throughput numbers (Gbps) across 4.12, 4.13, and
net-next, with and without Jason's recent "vhost_net: conditionally
enable tx polling" applied (referred to as 'patch' below). 1 uperf
instance in each case:
uperf TCP:
             4.12     4.13     4.13+patch   net-next   net-next+patch
----------------------------------------------------------------------
VM->VM       35.2     16.5     20.84        22.2       24.36
VM->Host     42.15    43.57    44.90        30.83      32.26
Host->VM     53.17    41.51    42.18        37.05      37.30
uperf UDP:
             4.12     4.13     4.13+patch   net-next   net-next+patch
----------------------------------------------------------------------
VM->VM       24.93    21.63    25.09        8.86       9.62
VM->Host     40.21    38.21    39.72        8.74       9.35
Host->VM     31.26    30.18    31.25        7.2        9.26
The net is that Jason's recent patch definitely improves things across
the board at 4.13 as well as at net-next -- but the VM->VM TCP numbers
I am observing are still lower than base 4.12.

A separate concern is why my UDP numbers look so bad on net-next (I have
not bisected this yet).