Message-Id: <da80025f-6942-615f-570e-5005a25eb147@linux.vnet.ibm.com>
Date: Thu, 26 Oct 2017 13:53:12 -0400
From: Matthew Rosato <mjrosato@...ux.vnet.ibm.com>
To: Wei Xu <wexu@...hat.com>
Cc: Jason Wang <jasowang@...hat.com>, mst@...hat.com,
netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: Regression in throughput between kvm guests over virtual bridge
>
> Are you using the same binding as mentioned in your previous mail? It
> might be caused by CPU contention between pktgen and vhost; could you
> please try running pktgen from another idle CPU by adjusting the binding?
I don't think that's the case -- I can cause pktgen to hang in the guest
without any cpu binding, and even with vhost disabled.
> BTW, did you see any improvement when running pktgen from the host, if no
> regression was found there? Since this can be reproduced with only 1 vcpu for
> the guest, could you try this binding? It might help simplify the problem.
> vcpu0 -> cpu2
> vhost -> cpu3
> pktgen -> cpu1
>
Yes -- I ran the pktgen test from host to guest with the binding
described. I see an approx 3.6% increase in throughput from 4.12 -> 4.13.
Some numbers:
host-4.12: 1384486.2 pps  663.8 MB/sec
host-4.13: 1434598.6 pps  688.2 MB/sec
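
For reference, a minimal sketch (Python, run as root on the host) of one way
the binding above could be applied -- not necessarily the exact steps used
here. QEMU_PID and the tap0 pktgen device are placeholders, and the
vhost-<pid> / "CPU 0/KVM" thread-name conventions are assumptions to adjust
for the actual setup:

import glob
import os

QEMU_PID = 12345      # placeholder: PID of the guest's qemu process
PKTGEN_DEV = "tap0"   # placeholder: host-side device pktgen transmits on

def pin(tid, cpu):
    # Pin a task/thread to a single host CPU.
    os.sched_setaffinity(tid, {cpu})

# vhost -> cpu3: the vhost worker is a kernel thread named vhost-<qemu pid>.
for status in glob.glob("/proc/[0-9]*/status"):
    try:
        with open(status) as f:
            if f.readline().strip() == f"Name:\tvhost-{QEMU_PID}":
                pin(int(status.split("/")[2]), 3)
    except OSError:
        pass  # task exited between glob and open

# vcpu0 -> cpu2: pin the qemu thread running vcpu0 (commonly named "CPU 0/KVM").
for tid in os.listdir(f"/proc/{QEMU_PID}/task"):
    with open(f"/proc/{QEMU_PID}/task/{tid}/comm") as f:
        if f.read().strip().startswith("CPU 0"):
            pin(int(tid), 2)

# pktgen -> cpu1: pktgen runs one kernel thread per CPU (kpktgend_N), so
# attaching the device to kpktgend_1 makes transmission happen on cpu1
# (requires 'modprobe pktgen').
with open("/proc/net/pktgen/kpktgend_1", "w") as f:
    f.write(f"add_device {PKTGEN_DEV}\n")

In practice taskset or virsh vcpupin accomplish the same pinning; the main
point is that pktgen's transmit CPU is selected by which per-CPU kpktgend_N
thread the device is attached to.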