Message-ID: <CA+aC4kv66bvt5mREv-YXVddD9wdgAiT=heo9xhdkkVmuLOyrBw@mail.gmail.com>
Date:	Thu, 29 Aug 2013 11:08:24 -0500
From:	Anthony Liguori <anthony@...emonkey.ws>
To:	Qin Chuanyu <qinchuanyu@...wei.com>
Cc:	"Michael S. Tsirkin" <mst@...hat.com>, jasowang@...hat.com,
	KVM list <kvm@...r.kernel.org>, netdev@...r.kernel.org,
	qianhuibin@...wei.com,
	"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>
Subject: Re: Is fallback vhost_net to qemu for live migrate available?

Hi Qin,

On Mon, Aug 26, 2013 at 10:32 PM, Qin Chuanyu <qinchuanyu@...wei.com> wrote:
> Hi all
>
> I am participating in a project which is trying to port vhost_net to Xen.

Neat!

> By changing the memory copy and notification mechanisms, virtio-net with
> vhost_net can currently run on Xen with good performance.

I think the key to doing this would be to implement a proper
ioeventfd and irqfd interface in the driver domain kernel.  Just
hacking vhost_net with Xen-specific knowledge would be pretty nasty
IMHO.
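
For context, on KVM the vhost kernel module is connected to the guest
purely through eventfds: QEMU hands vhost a "kick" eventfd (backed by
KVM's ioeventfd) and a "call" eventfd (backed by irqfd).  A rough
userspace-side sketch of that existing interface, using the ioctls from
<linux/vhost.h> (the wire_up_vring() helper and its error handling are
mine, not real QEMU code):

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int wire_up_vring(int vhost_fd, unsigned int index)
{
	struct vhost_vring_file kick = { .index = index };
	struct vhost_vring_file call = { .index = index };

	/* "kick": guest -> host notification; on KVM this fd is also
	 * registered as an ioeventfd, so a guest write to the queue
	 * notify register signals vhost directly. */
	kick.fd = eventfd(0, EFD_CLOEXEC);
	if (kick.fd < 0 || ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick) < 0)
		return -1;

	/* "call": host -> guest interrupt; on KVM this fd is also
	 * registered as an irqfd, so a signal from vhost becomes a
	 * guest interrupt. */
	call.fd = eventfd(0, EFD_CLOEXEC);
	if (call.fd < 0 || ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call) < 0)
		return -1;

	return 0;
}

A clean Xen port would then mean teaching the driver domain kernel to
signal the kick fd when the front end notifies, and to raise an event
channel interrupt when vhost signals the call fd, instead of
open-coding Xen event channels inside vhost_net itself.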

Did you modify the front end driver to do grant table mapping or is
this all being done by mapping the domain's memory?

> The TCP receive throughput of a single vnic went from 2.77Gbps up to
> 6Gbps. On the VM receive side, I replaced grant_copy with grant_map +
> memcpy, which effectively reduces the cost of the grant_table spin_lock
> in dom0, so the whole server's TCP throughput went from 5.33Gbps up to
> 9.5Gbps.
>
> Now I am considering live migration of vhost_net on Xen. vhost_net uses
> vhost_log for live migration on KVM, but QEMU on Xen does not manage the
> whole memory of the VM. So I am trying to fall the datapath back from
> vhost_net to QEMU while doing the live migration, and switch the datapath
> from QEMU back to vhost_net after the VM has migrated to the new server.

KVM and Xen represent memory in a very different way.  KVM can only
track when guest mode code dirties memory.  It relies on QEMU to track
when guest memory is dirtied by QEMU.  Since vhost is running outside
of QEMU, vhost also needs to tell QEMU when it has dirtied memory.
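
Concretely, when migration starts QEMU negotiates the VHOST_F_LOG_ALL
feature and hands vhost a bitmap (one bit per 4 KiB page) via
VHOST_SET_LOG_BASE; vhost sets a bit for every guest page it writes and
QEMU folds that into its own dirty bitmap.  A minimal sketch of that
interface, with a hypothetical enable_vhost_dirty_log() helper and no
error cleanup (this is not the actual QEMU code):

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

#define LOG_PAGE 0x1000ULL	/* vhost logs writes at 4 KiB granularity */

static int enable_vhost_dirty_log(int vhost_fd, uint64_t guest_mem_size)
{
	uint64_t bits  = (guest_mem_size + LOG_PAGE - 1) / LOG_PAGE;
	uint64_t bytes = (bits + 7) / 8;
	void *log = calloc(1, bytes);		/* one bit per guest page */
	uint64_t log_base = (uintptr_t)log;
	uint64_t features;

	if (!log)
		return -1;

	/* Ask vhost to start logging every write it does to guest memory. */
	if (ioctl(vhost_fd, VHOST_GET_FEATURES, &features) < 0)
		return -1;
	features |= 1ULL << VHOST_F_LOG_ALL;
	if (ioctl(vhost_fd, VHOST_SET_FEATURES, &features) < 0)
		return -1;

	/* Point vhost at the bitmap; the migration code periodically
	 * scans and clears it, merging dirtied pages into its own
	 * dirty bitmap. */
	return ioctl(vhost_fd, VHOST_SET_LOG_BASE, &log_base);
}

On Xen this whole step could potentially be skipped if the hypervisor's
log-dirty mode already catches the driver domain's writes into the
guest, which is the point below.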

I don't think this is a problem with Xen though.  I believe (although
could be wrong) that Xen is able to track when either the domain or
dom0 dirties memory.

So I think you can simply ignore the dirty logging with vhost and it
should Just Work.

>
> My question is:
>         Why doesn't vhost_net do the same fallback operation for live
> migration on KVM, instead of using vhost_log to mark dirty pages?
>         Is there any flaw in the mechanism of falling the datapath back
> from vhost_net to QEMU for live migration?

No, we don't have a mechanism to fall back to QEMU for the datapath.
It would be possible, but I think it's a bad idea to mix and match the
two.

Regards,

Anthony Liguori

> Any questions about the details of vhost_net on Xen are welcome.
>
> Thanks
>
>
