Message-ID: <20171031070717.wcbgrp6thrjmtrh3@Wei-Dev>
Date: Tue, 31 Oct 2017 15:07:17 +0800
From: Wei Xu <wexu@...hat.com>
To: Matthew Rosato <mjrosato@...ux.vnet.ibm.com>
Cc: Jason Wang <jasowang@...hat.com>, mst@...hat.com,
netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: Regression in throughput between kvm guests over virtual bridge
On Thu, Oct 26, 2017 at 01:53:12PM -0400, Matthew Rosato wrote:
>
> >
> > Are you using the same binding as mentioned in your previous mail? It
> > might be caused by cpu contention between pktgen and vhost; could you
> > please try running pktgen from another idle cpu by adjusting the binding?
>
> I don't think that's the case -- I can cause pktgen to hang in the guest
> without any cpu binding, and even with vhost disabled.
Yes, I did a test and it also hangs in the guest. Before we figure that out,
maybe you could try UDP with uperf for these cases (a rough profile sketch
follows the list):
VM -> Host
Host -> VM
VM -> VM
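
Something like the following could work as a starting point (a rough sketch
only, assuming uperf's stock XML profile format; the peer address is passed
through the $h environment variable that uperf expands at runtime, and the
duration and write size are just example values):

# On the receive side (run on the peer of each case):
uperf -s

# On the transmit side: write a minimal UDP streaming profile and run it.
cat > udp-stream.xml <<'EOF'
<?xml version="1.0"?>
<profile name="udp-stream">
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h protocol=udp"/>
    </transaction>
    <transaction duration="60s">
      <flowop type="write" options="size=1400"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>
EOF
h=<peer-ip> uperf -m udp-stream.xml    # <peer-ip> is a placeholder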
>
> > BTW, did you see any improvement when running pktgen from the host, where
> > no regression was found? Since this can be reproduced with only 1 vcpu in
> > the guest, could you try this binding? It might help simplify the problem.
> > vcpu0 -> cpu2
> > vhost -> cpu3
> > pktgen -> cpu1
> >
>
> Yes -- I ran the pktgen test from host to guest with the binding described
> (one way to set it up is sketched below the numbers). I see an approximately
> 5% increase in throughput from 4.12 -> 4.13. Some numbers:
>
> host-4.12: 1384486.2pps 663.8MB/sec
> host-4.13: 1434598.6pps 688.2MB/sec
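
For anyone reproducing this, one way to realize that binding and drive pktgen
from the host is roughly the following (a sketch only, assuming taskset and
the in-kernel pktgen /proc interface; the thread ids, tap device name, packet
size/count and the guest's IP/MAC are placeholders, not the values used above):

# Pin the guest's vcpu0 thread and the vhost worker thread
# (thread ids can be found under /proc/<qemu-pid>/task/ and via ps).
taskset -pc 2 <vcpu0-tid>
taskset -pc 3 <vhost-tid>

# Attach the host-side tap device to pktgen's cpu1 kernel thread.
modprobe pktgen
echo "rem_device_all"            > /proc/net/pktgen/kpktgend_1
echo "add_device tap0"           > /proc/net/pktgen/kpktgend_1
echo "count 10000000"            > /proc/net/pktgen/tap0
echo "pkt_size 512"              > /proc/net/pktgen/tap0
echo "dst 192.168.122.2"         > /proc/net/pktgen/tap0
echo "dst_mac 52:54:00:xx:xx:xx" > /proc/net/pktgen/tap0

# Start transmitting (blocks until "count" packets are sent), then read
# the per-device result back.
echo "start" > /proc/net/pktgen/pgctrl
cat /proc/net/pktgen/tap0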
That's great; at least we are aligned on this case.
Jason, any thoughts on this?
Wei
>