Message-ID: <521C1DCF.5090202@huawei.com>
Date: Tue, 27 Aug 2013 11:32:31 +0800
From: Qin Chuanyu <qinchuanyu@...wei.com>
To: "Michael S. Tsirkin" <mst@...hat.com>, <jasowang@...hat.com>
CC: <kvm@...r.kernel.org>, <netdev@...r.kernel.org>,
<qianhuibin@...wei.com>
Subject: Is falling back from vhost_net to qemu for live migration feasible?
Hi all
I am participating in a project that is porting vhost_net to Xen.
By changing the memory-copy and notification mechanisms, virtio-net
backed by vhost_net can now run on Xen with good performance: TCP
receive throughput of a single vNIC went from 2.77 Gbps up to 6 Gbps.
On the VM receive side, I replaced grant_copy with grant_map + memcpy,
which substantially reduces contention on dom0's grant_table spinlock,
so the whole server's TCP throughput went from 5.33 Gbps up to 9.5 Gbps.
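
For illustration, here is a minimal sketch of that receive-side idea,
assuming a single grant reference and a dom0 page already set aside for
the mapping (a real implementation would use alloc_xenballooned_pages()
and would cache the mapping across many packets rather than map/unmap
per copy). The helper name copy_from_guest_mapped() is mine, and error
handling is abbreviated:

	#include <linux/highmem.h>
	#include <linux/string.h>
	#include <xen/grant_table.h>

	/* Map one granted guest page into dom0 and copy from it with
	 * memcpy(), instead of issuing a GNTTABOP_copy that takes the
	 * grant_table lock for every packet. */
	static int copy_from_guest_mapped(grant_ref_t ref, domid_t domid,
					  struct page *page,
					  void *dst, size_t len)
	{
		struct gnttab_map_grant_ref map;
		struct gnttab_unmap_grant_ref unmap;
		unsigned long vaddr = (unsigned long)page_address(page);
		int err;

		/* One mapping hypercall; reuse it for many packets in a
		 * real datapath instead of mapping here each time. */
		gnttab_set_map_op(&map, vaddr,
				  GNTMAP_host_map | GNTMAP_readonly,
				  ref, domid);
		err = gnttab_map_refs(&map, NULL, &page, 1);
		if (err || map.status != GNTST_okay)
			return -EFAULT;

		/* Plain memcpy in dom0: no grant_table spinlock here. */
		memcpy(dst, (void *)vaddr, len);

		gnttab_set_unmap_op(&unmap, vaddr, GNTMAP_host_map,
				    map.handle);
		gnttab_unmap_refs(&unmap, NULL, &page, 1);
		return 0;
	}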
Now I am considering live migration of vhost_net on Xen. On KVM,
vhost_net uses vhost_log (a dirty-page bitmap) for live migration, but
on Xen qemu does not manage the whole memory of the VM, so I am trying
to fall the datapath back from vhost_net to qemu while live migration
is in progress, and switch the datapath from qemu back to vhost_net
after the VM has migrated to the new server.
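
The switch itself could look roughly like the sketch below, built on
qemu's existing vhost_net_start()/vhost_net_stop() (hw/net/vhost_net.c);
the hook name and the migration trigger are my assumptions, not
existing qemu code:

    /* Hypothetical migration hook: tear down the in-kernel datapath so
     * the userspace virtio-net backend takes over for the duration of
     * the migration, then restore vhost_net on the destination. */
    static void virtio_net_migration_fallback(VirtIONet *n, bool migrating)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(n);
        int queues = n->multiqueue ? n->max_queues : 1;

        if (migrating) {
            /* vhost_net_stop() reads the virtqueue state
             * (last_avail_idx and friends) back from the kernel, so
             * qemu's own virtio-net code continues where vhost left
             * off.  All guest memory writes then happen in qemu and
             * are seen by the ordinary dirty-page tracking. */
            vhost_net_stop(vdev, n->nic->ncs, queues);
        } else {
            vhost_net_start(vdev, n->nic->ncs, queues);
        }
    }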
My questions are:
Why doesn't vhost_net take this same fallback approach for live
migration on KVM, instead of using vhost_log to mark dirty pages
(sketched below)?
Is there any flaw in the mechanism of falling the datapath back from
vhost_net to qemu for live migration?
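
For context, this is roughly how the vhost dirty log is enabled from
userspace on KVM (a sketch; error handling is trimmed, and vhost_fd is
assumed to be an already-configured /dev/vhost-net descriptor):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Turn on vhost dirty-page logging.  The kernel then sets a bit in
     * the userspace bitmap for every guest page it writes (used-ring
     * updates and received packet data), and qemu merges that bitmap
     * into its migration dirty log. */
    static int enable_vhost_dirty_log(int vhost_fd, void *log_bitmap)
    {
        uint64_t features;
        uint64_t log_base = (uint64_t)(uintptr_t)log_bitmap;

        if (ioctl(vhost_fd, VHOST_GET_FEATURES, &features) < 0)
            return -1;
        features |= 1ULL << VHOST_F_LOG_ALL;
        if (ioctl(vhost_fd, VHOST_SET_FEATURES, &features) < 0)
            return -1;
        return ioctl(vhost_fd, VHOST_SET_LOG_BASE, &log_base);
    }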
Any questions about the details of vhost_net on Xen are welcome.
Thanks