Message-ID: <20171012183132.qrbgnmvki6lpgt4a@Wei-Dev>
Date: Fri, 13 Oct 2017 02:31:32 +0800
From: Wei Xu <wexu@...hat.com>
To: Matthew Rosato <mjrosato@...ux.vnet.ibm.com>
Cc: Jason Wang <jasowang@...hat.com>, netdev@...r.kernel.org,
davem@...emloft.net, mst@...hat.com
Subject: Re: Regression in throughput between kvm guests over virtual bridge
On Thu, Oct 05, 2017 at 04:07:45PM -0400, Matthew Rosato wrote:
>
> Ping... Jason, any other ideas or suggestions?
Hi Matthew,
Recently I have been running a similar test on x86 for this patch; here are some
differences between our testbeds.
1. It is nice that you got an improvement with 50+ instances (or connections here?),
which should be quite helpful for addressing the issue, and you have also figured out
the cost (wait/wakeup). A kind reminder: did you pin the uperf client/server along the
whole path, besides the vhost and vcpu threads? See the sketch right below.
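For what it's worth, here is a minimal sketch of doing that pinning programmatically
with sched_setaffinity(); the pid/cpu arguments are placeholders for your topology, and
taskset on the uperf, vhost and vcpu thread ids achieves the same thing.

/* pin_thread.c - sketch: pin a task (uperf/vhost/vcpu thread) to one host CPU.
 * Build: gcc -o pin_thread pin_thread.c
 * Usage: ./pin_thread <pid> <cpu>
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <pid> <cpu>\n", argv[0]);
                return 1;
        }

        pid_t pid = atoi(argv[1]);      /* thread id to pin (placeholder) */
        int cpu = atoi(argv[2]);        /* host CPU to pin it to (placeholder) */

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);

        if (sched_setaffinity(pid, sizeof(set), &set)) {
                perror("sched_setaffinity");
                return 1;
        }
        return 0;
}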
2. It might be useful to shorten the traffic path as a reference. What I am running
is briefly:
pktgen(host kernel) -> tap(x) -> guest(DPDK testpmd)
In my experience the bridge driver (br_forward(), etc.) can impact performance, so I
eventually settled on this simplified testbed, which fully isolates the traffic from
both userspace and the host kernel stack (1 and 50 instances, the bridge driver, etc.)
and therefore reduces potential interference.
The downside is that it needs DPDK support in the guest; has this ever been run on an
s390x guest? An alternative approach is to run XDP drop directly on the virtio-net NIC
in the guest, although this requires compiling XDP inside the guest, which needs a
newer distro (Fedora 25+ in my case, or Ubuntu 16.10, not sure); see the sketch below.
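In case it is easier than DPDK on s390x, a minimal sketch of that XDP drop alternative
is below. It only assumes clang and kernel headers inside the guest plus iproute2 to
attach; the eth0 device name is just an example.

/* xdp_drop.c - sketch: drop every packet at the XDP hook of the guest's
 * virtio-net device, so nothing reaches the guest stack.
 * Build:  clang -O2 -target bpf -c xdp_drop.c -o xdp_drop.o
 * Attach: ip link set dev eth0 xdp obj xdp_drop.o sec xdp
 */
#include <linux/bpf.h>

#ifndef SEC
#define SEC(name) __attribute__((section(name), used))
#endif

SEC("xdp")
int xdp_drop_prog(struct xdp_md *ctx)
{
        return XDP_DROP;        /* sink all traffic as fast as possible */
}

char _license[] SEC("license") = "GPL";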
3. BTW, did you enable hugepages for your guest? It can affect performance more or
less depending on the memory demand when generating traffic; I didn't see a similar
option in your command lines.
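If it helps, a quick sketch to check whether hugepages are actually available on the
host before backing guest memory with them; the 2 MB page size is an x86 assumption,
so adjust it to the host's hugepage size.

/* hugepage_check.c - sketch: try to get one hugepage-backed mapping.
 * Build: gcc -o hugepage_check hugepage_check.c
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 2 * 1024 * 1024;   /* one hugepage on x86; differs elsewhere */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap(MAP_HUGETLB)");    /* likely no hugepages reserved */
                return 1;
        }
        printf("got a hugepage-backed mapping at %p\n", p);
        munmap(p, len);
        return 0;
}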
Hope this doesn't make things more complicated for you. :) We will keep working on
this and update you.
Thanks,
Wei
>