Message-ID: <522174D7.6080903@huawei.com>
Date:	Sat, 31 Aug 2013 12:45:11 +0800
From:	Qin Chuanyu <qinchuanyu@...wei.com>
To:	Anthony Liguori <anthony@...emonkey.ws>
CC:	"Michael S. Tsirkin" <mst@...hat.com>, <jasowang@...hat.com>,
	KVM list <kvm@...r.kernel.org>, <netdev@...r.kernel.org>,
	<qianhuibin@...wei.com>,
	"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
	<wangfuhai@...wei.com>, <likunyun@...wei.com>,
	<liuyongan@...wei.com>, <liuyingdong@...wei.com>
Subject: Re: Is fallback vhost_net to qemu for live migrate available?

On 2013/8/30 0:08, Anthony Liguori wrote:
> Hi Qin,

>> By changing the memory copy and notify mechanisms, virtio-net with
>> vhost_net can currently run on Xen with good performance.
>
> I think the key in doing this would be to implement a proper
> ioeventfd and irqfd interface in the driver domain kernel.  Just
> hacking vhost_net with Xen specific knowledge would be pretty nasty
> IMHO.
>
Yes, I added a kernel module which persists the virtio-net pio_addr and 
MSI-X address, as the kvm module does. The guest wakes up the vhost 
thread through a hook function added in evtchn_interrupt.
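
Roughly, the hook amounts to the sketch below (simplified and for
illustration only; vhost_kick_ctx just stands for wherever the notify
module keeps the eventfd that vhost polls on):

#include <linux/eventfd.h>

/* Set by the notify module when the vhost kick eventfd is registered
 * for the guest's virtio-net notify port (illustrative only). */
static struct eventfd_ctx *vhost_kick_ctx;

/* Hook called from the evtchn_interrupt() path for that port:
 * signalling the eventfd wakes the vhost worker thread, much like a
 * KVM ioeventfd write would. */
static void xen_vhost_kick_hook(void)
{
	if (vhost_kick_ctx)
		eventfd_signal(vhost_kick_ctx, 1);
}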

> Did you modify the front end driver to do grant table mapping or is
> this all being done by mapping the domain's memory?
>
Nothing is changed in the front-end driver. Currently I use 
alloc_vm_area to get address space, and map the domain's memory the 
same way QEMU does.
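
In other words, something along these lines (a simplified sketch;
map_foreign_gfn() is only a placeholder for the hypercall that actually
points each PTE at the guest's frame, which I haven't shown):

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <xen/interface/xen.h>

/* Placeholder for the foreign-mapping step that installs guest frame
 * gfn into the PTE slot handed back by alloc_vm_area(). */
static int map_foreign_gfn(domid_t domid, unsigned long gfn, pte_t *pte);

static void *map_guest_region(domid_t domid, const unsigned long *gfns,
			      unsigned int nr_pages)
{
	struct vm_struct *area;
	pte_t **ptes;
	unsigned int i;

	ptes = kcalloc(nr_pages, sizeof(*ptes), GFP_KERNEL);
	if (!ptes)
		return NULL;

	/* Reserve kernel address space, one PTE slot per page. */
	area = alloc_vm_area(nr_pages * PAGE_SIZE, ptes);
	if (!area) {
		kfree(ptes);
		return NULL;
	}

	for (i = 0; i < nr_pages; i++) {
		if (map_foreign_gfn(domid, gfns[i], ptes[i]) < 0) {
			free_vm_area(area);
			kfree(ptes);
			return NULL;
		}
	}

	kfree(ptes);
	return area->addr;	/* vhost can now access the guest rings here */
}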

> KVM and Xen represent memory in a very different way.  KVM can only
> track when guest mode code dirties memory.  It relies on QEMU to track
> when guest memory is dirtied by QEMU.  Since vhost is running outside
> of QEMU, vhost also needs to tell QEMU when it has dirtied memory.
>
> I don't think this is a problem with Xen though.  I believe (although
> could be wrong) that Xen is able to track when either the domain or
> dom0 dirties memory.
>
> So I think you can simply ignore the dirty logging with vhost and it
> should Just Work.
>
Thanks for your advice, I have tried it. Without ping traffic it 
migrates successfully, but if an skb is received during migration, domU 
crashes. I guess that is because, although Xen tracks domU memory, it 
can only track memory changed by domU itself; memory changed by dom0 is 
not tracked at all.

>
> No, we don't have a mechanism to fall back to QEMU for the datapath.
> It would be possible but I think it's a bad idea to mix and match the
> two.
>
Next I would try to fall back the datapath to QEMU, for three reasons:
1: The memory translation mechanism has been changed for vhost_net on 
Xen, so some corresponding changes would be needed in vhost_log in the 
kernel.

2: I also mapped the IOREQ_PFN page (which is used for communication 
between QEMU and Xen) in the kernel notify module, so it too needs to 
be marked dirty whenever tx/rx happens during the migration period 
(see the snippet after this list).

3: Most important of all, Michael S. Tsirkin said that he hadn't 
considered vhost_net migration on Xen, so some changes would also be 
needed in vhost_log on the QEMU side.
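
For reason 2 in particular, the notify module would have to do
something like the earlier sketch whenever it writes the shared IOREQ
page during migration (log_dirty_range() is just the illustrative
helper from above, and ioreq_gfn stands for that page's frame number):

	/* mark the shared IOREQ page dirty after writing to it */
	log_dirty_range(log_bitmap, (u64)ioreq_gfn << PAGE_SHIFT, PAGE_SIZE);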

Falling back to QEMU seems much easier, doesn't it?

Regards
Qin chuanyu


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
